Test Report: QEMU_macOS 19648

5a5b9bbbb8805a9ff40b088174fcc86278d72994:2024-09-15:36226

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.85
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.25
22 TestOffline 10.07
33 TestAddons/parallel/Registry 71.3
46 TestCertOptions 10.13
47 TestCertExpiration 195.28
48 TestDockerFlags 10.37
49 TestForceSystemdFlag 10.4
50 TestForceSystemdEnv 12.07
95 TestFunctional/parallel/ServiceCmdConnect 40.44
167 TestMultiControlPlane/serial/StopSecondaryNode 214.12
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 102.76
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.73
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.07
175 TestMultiControlPlane/serial/RestartCluster 5.25
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 9.97
184 TestJSONOutput/start/Command 9.94
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.14
216 TestMountStart/serial/StartWithMountFirst 9.95
219 TestMultiNode/serial/FreshStart2Nodes 10
220 TestMultiNode/serial/DeployApp2Nodes 110.91
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 38.41
228 TestMultiNode/serial/RestartKeepsNodes 7.13
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 3.56
231 TestMultiNode/serial/RestartMultiNode 5.24
232 TestMultiNode/serial/ValidateNameConflict 20.09
236 TestPreload 10.07
238 TestScheduledStopUnix 10.07
239 TestSkaffold 12.68
242 TestRunningBinaryUpgrade 588.43
244 TestKubernetesUpgrade 18.5
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.07
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.82
260 TestStoppedBinaryUpgrade/Upgrade 573.95
262 TestPause/serial/Start 9.85
272 TestNoKubernetes/serial/StartWithK8s 9.81
273 TestNoKubernetes/serial/StartWithStopK8s 5.3
274 TestNoKubernetes/serial/Start 5.29
278 TestNoKubernetes/serial/StartNoArgs 5.31
280 TestNetworkPlugins/group/auto/Start 9.98
281 TestNetworkPlugins/group/kindnet/Start 9.92
282 TestNetworkPlugins/group/flannel/Start 9.87
283 TestNetworkPlugins/group/enable-default-cni/Start 9.8
284 TestNetworkPlugins/group/bridge/Start 9.85
285 TestNetworkPlugins/group/kubenet/Start 9.98
286 TestNetworkPlugins/group/custom-flannel/Start 9.9
287 TestNetworkPlugins/group/calico/Start 9.88
288 TestNetworkPlugins/group/false/Start 9.86
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.96
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.22
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 10.01
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.24
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/no-preload/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 10.14
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 12.08
316 TestStartStop/group/embed-certs/serial/DeployApp 0.1
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/embed-certs/serial/SecondStart 5.25
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.45
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 10.15
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
340 TestStartStop/group/newest-cni/serial/SecondStart 5.24
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (14.85s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-011000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-011000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.851652083s)

-- stdout --
	{"specversion":"1.0","id":"6c249997-67a1-437a-8f39-046960ec8a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-011000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b10b8d61-2cb6-45e9-aeb3-31467f808fe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"3c144497-ec90-4adf-8332-88750c22ced9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig"}}
	{"specversion":"1.0","id":"855ef9e8-fe9b-4d0a-a982-46b51f3674f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"30e221f7-610f-431c-b71f-b801500b3d58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b12833ac-0353-4b10-a5be-7827f64c3c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube"}}
	{"specversion":"1.0","id":"d1ec2240-8612-4a1b-a5ea-63bd7a47dd28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"8ce91e0f-00f8-40d2-ae7c-122eab567cc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"55545c8f-e7f1-426c-bc83-bcf88e9de1ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c814ce74-4445-40a9-b3bd-31bb46430508","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2144de0d-b775-4d6f-a2ad-e0e60642c573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-011000\" primary control-plane node in \"download-only-011000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c40b135f-e4bc-4fb5-b311-5bf73306f997","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7efa3ce2-6070-40e2-a54f-5d199a1f28c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0] Decompressors:map[bz2:0x14000815ae0 gz:0x14000815ae8 tar:0x14000815a50 tar.bz2:0x14000815a60 tar.gz:0x14000815a70 tar.xz:0x14000815aa0 tar.zst:0x14000815ad0 tbz2:0x14000815a60 tgz:0x14
000815a70 txz:0x14000815aa0 tzst:0x14000815ad0 xz:0x14000815b00 zip:0x14000815b10 zst:0x14000815b08] Getters:map[file:0x14000065f00 http:0x14000bd8370 https:0x14000bd83c0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"42b33e50-26c8-49d5-be71-791a39fbf210","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0915 10:55:34.212043    2176 out.go:345] Setting OutFile to fd 1 ...
	I0915 10:55:34.212177    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:34.212181    2176 out.go:358] Setting ErrFile to fd 2...
	I0915 10:55:34.212183    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:34.212316    2176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	W0915 10:55:34.212404    2176 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19648-1650/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19648-1650/.minikube/config/config.json: no such file or directory
	I0915 10:55:34.213709    2176 out.go:352] Setting JSON to true
	I0915 10:55:34.231055    2176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1497,"bootTime":1726421437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 10:55:34.231141    2176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 10:55:34.236630    2176 out.go:97] [download-only-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 10:55:34.236774    2176 notify.go:220] Checking for updates...
	W0915 10:55:34.236847    2176 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 10:55:34.240556    2176 out.go:169] MINIKUBE_LOCATION=19648
	I0915 10:55:34.249651    2176 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:55:34.252562    2176 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 10:55:34.256653    2176 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 10:55:34.259676    2176 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	W0915 10:55:34.265597    2176 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 10:55:34.265808    2176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 10:55:34.270755    2176 out.go:97] Using the qemu2 driver based on user configuration
	I0915 10:55:34.270776    2176 start.go:297] selected driver: qemu2
	I0915 10:55:34.270792    2176 start.go:901] validating driver "qemu2" against <nil>
	I0915 10:55:34.270869    2176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 10:55:34.272480    2176 out.go:169] Automatically selected the socket_vmnet network
	I0915 10:55:34.278320    2176 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0915 10:55:34.278412    2176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 10:55:34.278466    2176 cni.go:84] Creating CNI manager for ""
	I0915 10:55:34.278510    2176 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 10:55:34.278556    2176 start.go:340] cluster config:
	{Name:download-only-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:55:34.283782    2176 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 10:55:34.287717    2176 out.go:97] Downloading VM boot image ...
	I0915 10:55:34.287733    2176 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso
	I0915 10:55:41.114368    2176 out.go:97] Starting "download-only-011000" primary control-plane node in "download-only-011000" cluster
	I0915 10:55:41.114407    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:41.174141    2176 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 10:55:41.174163    2176 cache.go:56] Caching tarball of preloaded images
	I0915 10:55:41.174327    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:41.178499    2176 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 10:55:41.178505    2176 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:41.256964    2176 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 10:55:47.706333    2176 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:47.706488    2176 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:48.402640    2176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 10:55:48.402850    2176 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/download-only-011000/config.json ...
	I0915 10:55:48.402870    2176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/download-only-011000/config.json: {Name:mk0f71c4e23cea7aa16097fd110f28e477dbb5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:55:48.403103    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:48.403298    2176 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0915 10:55:48.982838    2176 out.go:193] 
	W0915 10:55:48.989031    2176 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0] Decompressors:map[bz2:0x14000815ae0 gz:0x14000815ae8 tar:0x14000815a50 tar.bz2:0x14000815a60 tar.gz:0x14000815a70 tar.xz:0x14000815aa0 tar.zst:0x14000815ad0 tbz2:0x14000815a60 tgz:0x14000815a70 txz:0x14000815aa0 tzst:0x14000815ad0 xz:0x14000815b00 zip:0x14000815b10 zst:0x14000815b08] Getters:map[file:0x14000065f00 http:0x14000bd8370 https:0x14000bd83c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0915 10:55:48.989056    2176 out_reason.go:110] 
	W0915 10:55:48.999837    2176 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 10:55:49.003795    2176 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-011000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.85s)
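
The root cause is the 404 on the kubectl checksum URL: dl.k8s.io publishes no darwin/arm64 kubectl binary for v1.20.0 (upstream only began shipping darwin/arm64 builds in later releases, around v1.23), so the checksum fetch can only fail. A minimal confirmation sketch, using the exact URL from the log and assuming curl is available:

	# HEAD request for the checksum file minikube tried to fetch;
	# a "404 Not Found" status line reproduces the INET_CACHE_KUBECTL error above.
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1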

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
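
This failure is a direct cascade of the json-events failure above: the download step exited with status 40 before caching kubectl, so the file this test stats was never written. Checking is a one-liner against the path from the error message:

	# Expected cache location of the kubectl binary; absent because the
	# preceding download-only run failed before writing it.
	stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl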

TestBinaryMirror (0.25s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-208000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-208000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 : exit status 40 (151.150542ms)

-- stdout --
	* [binary-mirror-208000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-208000" primary control-plane node in "binary-mirror-208000" cluster
	
	

-- /stdout --
** stderr ** 
	I0915 10:55:56.770587    2244 out.go:345] Setting OutFile to fd 1 ...
	I0915 10:55:56.770726    2244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:56.770730    2244 out.go:358] Setting ErrFile to fd 2...
	I0915 10:55:56.770733    2244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:56.770875    2244 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 10:55:56.771958    2244 out.go:352] Setting JSON to false
	I0915 10:55:56.787887    2244 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1519,"bootTime":1726421437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 10:55:56.787956    2244 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 10:55:56.793144    2244 out.go:177] * [binary-mirror-208000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 10:55:56.800135    2244 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 10:55:56.800197    2244 notify.go:220] Checking for updates...
	I0915 10:55:56.807027    2244 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:55:56.810103    2244 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 10:55:56.813150    2244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 10:55:56.816121    2244 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 10:55:56.819243    2244 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 10:55:56.823073    2244 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 10:55:56.830071    2244 start.go:297] selected driver: qemu2
	I0915 10:55:56.830079    2244 start.go:901] validating driver "qemu2" against <nil>
	I0915 10:55:56.830148    2244 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 10:55:56.833076    2244 out.go:177] * Automatically selected the socket_vmnet network
	I0915 10:55:56.836581    2244 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0915 10:55:56.836680    2244 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 10:55:56.836701    2244 cni.go:84] Creating CNI manager for ""
	I0915 10:55:56.836731    2244 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 10:55:56.836740    2244 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 10:55:56.836792    2244 start.go:340] cluster config:
	{Name:binary-mirror-208000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-208000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49314 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:55:56.840338    2244 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 10:55:56.848102    2244 out.go:177] * Starting "binary-mirror-208000" primary control-plane node in "binary-mirror-208000" cluster
	I0915 10:55:56.852119    2244 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:55:56.852135    2244 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 10:55:56.852147    2244 cache.go:56] Caching tarball of preloaded images
	I0915 10:55:56.852209    2244 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 10:55:56.852215    2244 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 10:55:56.852423    2244 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/binary-mirror-208000/config.json ...
	I0915 10:55:56.852433    2244 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/binary-mirror-208000/config.json: {Name:mk2b204e01a6f7339bdc65b228aac4373f7e1191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:55:56.852801    2244 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:55:56.852858    2244 download.go:107] Downloading: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0915 10:55:56.870167    2244 out.go:201] 
	W0915 10:55:56.874166    2244 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0] Decompressors:map[bz2:0x1400012b6c0 gz:0x1400012b6c8 tar:0x1400012b670 tar.bz2:0x1400012b680 tar.gz:0x1400012b690 tar.xz:0x1400012b6a0 tar.zst:0x1400012b6b0 tbz2:0x1400012b680 tgz:0x1400012b690 txz:0x1400012b6a0 tzst:0x1400012b6b0 xz:0x1400012b6d0 zip:0x1400012b6e0 zst:0x1400012b6d8] Getters:map[file:0x140013ce320 http:0x14000898190 https:0x140008981e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0 0x108be97a0] Decompressors:map[bz2:0x1400012b6c0 gz:0x1400012b6c8 tar:0x1400012b670 tar.bz2:0x1400012b680 tar.gz:0x1400012b690 tar.xz:0x1400012b6a0 tar.zst:0x1400012b6b0 tbz2:0x1400012b680 tgz:0x1400012b690 txz:0x1400012b6a0 tzst:0x1400012b6b0 xz:0x1400012b6d0 zip:0x1400012b6e0 zst:0x1400012b6d8] Getters:map[file:0x140013ce320 http:0x14000898190 https:0x140008981e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	W0915 10:55:56.874175    2244 out.go:270] * 
	* 
	W0915 10:55:56.874606    2244 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 10:55:56.882095    2244 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-208000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49314" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-208000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-208000
--- FAIL: TestBinaryMirror (0.25s)
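
Here the failure mode differs from the plain download test: the test's local mirror at 127.0.0.1:49314 closed the connection mid-transfer ("unexpected EOF") rather than returning an HTTP error code. A hedged probe of the same URL shape minikube requests first (the mirror only exists while the test runs, so MIRROR below is a placeholder):

	# go-getter fetches the .sha256 checksum given in the ?checksum= parameter;
	# a truncated or empty response would surface as the same "unexpected EOF".
	MIRROR=http://127.0.0.1:49314   # hypothetical: substitute a live mirror
	curl -v "$MIRROR/v1.31.1/bin/darwin/arm64/kubectl.sha256"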

TestOffline (10.07s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-585000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-585000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.912499125s)

-- stdout --
	* [offline-docker-585000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-585000" primary control-plane node in "offline-docker-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:41:29.938551    4986 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:41:29.938751    4986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:29.938754    4986 out.go:358] Setting ErrFile to fd 2...
	I0915 11:41:29.938756    4986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:29.938869    4986 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:41:29.940294    4986 out.go:352] Setting JSON to false
	I0915 11:41:29.958090    4986 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4252,"bootTime":1726421437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:41:29.958165    4986 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:41:29.962695    4986 out.go:177] * [offline-docker-585000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:41:29.970408    4986 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:41:29.970400    4986 notify.go:220] Checking for updates...
	I0915 11:41:29.974494    4986 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:41:29.977541    4986 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:41:29.979126    4986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:41:29.981572    4986 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:41:29.984567    4986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:41:29.987940    4986 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:29.987995    4986 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:41:29.991539    4986 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:41:29.998550    4986 start.go:297] selected driver: qemu2
	I0915 11:41:29.998560    4986 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:41:29.998567    4986 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:41:30.000465    4986 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:41:30.003456    4986 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:41:30.006706    4986 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:41:30.006727    4986 cni.go:84] Creating CNI manager for ""
	I0915 11:41:30.006752    4986 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:41:30.006760    4986 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:41:30.006798    4986 start.go:340] cluster config:
	{Name:offline-docker-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:41:30.010575    4986 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:41:30.017492    4986 out.go:177] * Starting "offline-docker-585000" primary control-plane node in "offline-docker-585000" cluster
	I0915 11:41:30.021535    4986 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:41:30.021555    4986 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:41:30.021565    4986 cache.go:56] Caching tarball of preloaded images
	I0915 11:41:30.021641    4986 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:41:30.021646    4986 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:41:30.021709    4986 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/offline-docker-585000/config.json ...
	I0915 11:41:30.021720    4986 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/offline-docker-585000/config.json: {Name:mkdedc90d0c1acd3337808509c331c47914bc1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:41:30.022020    4986 start.go:360] acquireMachinesLock for offline-docker-585000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:30.022054    4986 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "offline-docker-585000"
	I0915 11:41:30.022070    4986 start.go:93] Provisioning new machine with config: &{Name:offline-docker-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:30.022094    4986 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:30.026537    4986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:30.042383    4986 start.go:159] libmachine.API.Create for "offline-docker-585000" (driver="qemu2")
	I0915 11:41:30.042416    4986 client.go:168] LocalClient.Create starting
	I0915 11:41:30.042505    4986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:30.042534    4986 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:30.042542    4986 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:30.042585    4986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:30.042608    4986 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:30.042617    4986 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:30.042993    4986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:30.202074    4986 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:30.309541    4986 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:30.309555    4986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:30.309751    4986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
	I0915 11:41:30.319867    4986 main.go:141] libmachine: STDOUT: 
	I0915 11:41:30.319893    4986 main.go:141] libmachine: STDERR: 
	I0915 11:41:30.319973    4986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2 +20000M
	I0915 11:41:30.328901    4986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:30.328922    4986 main.go:141] libmachine: STDERR: 
	I0915 11:41:30.328941    4986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
	I0915 11:41:30.328946    4986 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:30.328957    4986 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:30.328991    4986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ac:0d:7a:60:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
	I0915 11:41:30.330742    4986 main.go:141] libmachine: STDOUT: 
	I0915 11:41:30.330759    4986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:30.330781    4986 client.go:171] duration metric: took 288.360875ms to LocalClient.Create
	I0915 11:41:32.332876    4986 start.go:128] duration metric: took 2.310797334s to createHost
	I0915 11:41:32.332919    4986 start.go:83] releasing machines lock for "offline-docker-585000", held for 2.310889208s
	W0915 11:41:32.332932    4986 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:32.338146    4986 out.go:177] * Deleting "offline-docker-585000" in qemu2 ...
	W0915 11:41:32.359324    4986 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:32.359349    4986 start.go:729] Will try again in 5 seconds ...
	I0915 11:41:37.361354    4986 start.go:360] acquireMachinesLock for offline-docker-585000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:37.361461    4986 start.go:364] duration metric: took 86.208µs to acquireMachinesLock for "offline-docker-585000"
	I0915 11:41:37.361487    4986 start.go:93] Provisioning new machine with config: &{Name:offline-docker-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:37.361557    4986 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:37.371762    4986 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:37.387510    4986 start.go:159] libmachine.API.Create for "offline-docker-585000" (driver="qemu2")
	I0915 11:41:37.387539    4986 client.go:168] LocalClient.Create starting
	I0915 11:41:37.387609    4986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:37.387640    4986 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:37.387649    4986 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:37.387681    4986 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:37.387704    4986 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:37.387712    4986 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:37.387991    4986 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:37.542433    4986 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:37.755867    4986 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:37.755878    4986 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:37.756095    4986 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
	I0915 11:41:37.766283    4986 main.go:141] libmachine: STDOUT: 
	I0915 11:41:37.766317    4986 main.go:141] libmachine: STDERR: 
	I0915 11:41:37.766398    4986 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2 +20000M
	I0915 11:41:37.775635    4986 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:37.775657    4986 main.go:141] libmachine: STDERR: 
	I0915 11:41:37.775678    4986 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
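
The two qemu-img invocations above are how libmachine materializes the VM disk: an empty raw seed file is converted into qcow2 format and then grown in place. A minimal sketch of the same two steps, assuming qemu-img is on PATH; the file names are illustrative placeholders:

    // Two-step qcow2 creation as shown in the log: convert raw -> qcow2, then resize.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func createDisk(raw, qcow2 string, extraMB int) error {
    	// Step 1: convert the raw seed file into qcow2 format.
    	if out, err := exec.Command("qemu-img", "convert",
    		"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
    		return fmt.Errorf("convert: %v: %s", err, out)
    	}
    	// Step 2: grow the image; qcow2 allocates lazily, so this is cheap.
    	if out, err := exec.Command("qemu-img", "resize",
    		qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
    		return fmt.Errorf("resize: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
    		fmt.Println(err)
    	}
    }
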
	I0915 11:41:37.775682    4986 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:37.775693    4986 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:37.775721    4986 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:c6:c9:f5:c9:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/offline-docker-585000/disk.qcow2
	I0915 11:41:37.777668    4986 main.go:141] libmachine: STDOUT: 
	I0915 11:41:37.777686    4986 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:37.777701    4986 client.go:171] duration metric: took 390.161458ms to LocalClient.Create
	I0915 11:41:39.779889    4986 start.go:128] duration metric: took 2.418331208s to createHost
	I0915 11:41:39.779987    4986 start.go:83] releasing machines lock for "offline-docker-585000", held for 2.418545833s
	W0915 11:41:39.780379    4986 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:39.790752    4986 out.go:201] 
	W0915 11:41:39.794875    4986 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:41:39.794949    4986 out.go:270] * 
	W0915 11:41:39.797660    4986 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:41:39.806816    4986 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-585000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-15 11:41:39.822906 -0700 PDT m=+2765.671997542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-585000 -n offline-docker-585000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-585000 -n offline-docker-585000: exit status 7 (69.069ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-585000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-585000
--- FAIL: TestOffline (10.07s)

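Every qemu2 start in this report fails at this same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so it never obtains the network file descriptor it hands to qemu (the -netdev socket,id=net0,fd=3 flag in the command above). A minimal host-side probe of that socket, assuming the default path from the log:

    // Probe the socket_vmnet control socket; a "connection refused" dial
    // error reproduces the failure seen in the TestOffline output above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Println("socket_vmnet unreachable:", err) // daemon not running, or wrong path
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }
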
TestAddons/parallel/Registry (71.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.202125ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-66pdh" [95e9f23d-5878-4962-aaec-4a917383b9a2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011957583s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7jd6c" [07c364fc-3808-4c42-a919-efc8d8fd3ddc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009484875s
addons_test.go:342: (dbg) Run:  kubectl --context addons-620000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-620000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-620000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.062572167s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-620000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 ip
2024/09/15 11:09:03 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable registry --alsologtostderr -v=1
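
The assertion above expects the in-cluster wget against registry.kube-system.svc.cluster.local to return HTTP/1.1 200; here it timed out, after which the test falls back to probing the registry on the node address (the DEBUG GET line). A host-side sketch of that fallback probe, assuming the endpoint shown in the DEBUG line:

    // Host-side registry probe against the node address from the DEBUG line.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 5 * time.Second}
    	resp, err := client.Get("http://192.168.105.2:5000/")
    	if err != nil {
    		fmt.Println("registry unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status) // the test expects HTTP/1.1 200
    }
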
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-620000 -n addons-620000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | -p download-only-011000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| delete  | -p download-only-011000              | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| start   | -o=json --download-only              | download-only-082000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | -p download-only-082000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| delete  | -p download-only-082000              | download-only-082000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| delete  | -p download-only-011000              | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| delete  | -p download-only-082000              | download-only-082000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| start   | --download-only -p                   | binary-mirror-208000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | binary-mirror-208000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49314               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-208000              | binary-mirror-208000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| addons  | disable dashboard -p                 | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | addons-620000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | addons-620000                        |                      |         |         |                     |                     |
	| start   | -p addons-620000 --wait=true         | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:59 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-620000 addons disable         | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 10:59 PDT | 15 Sep 24 10:59 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-620000 addons                 | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:08 PDT | 15 Sep 24 11:08 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-620000 addons                 | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:08 PDT | 15 Sep 24 11:08 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-620000 addons                 | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:08 PDT | 15 Sep 24 11:08 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:09 PDT |                     |
	|         | addons-620000                        |                      |         |         |                     |                     |
	| ip      | addons-620000 ip                     | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:09 PDT | 15 Sep 24 11:09 PDT |
	| addons  | addons-620000 addons disable         | addons-620000        | jenkins | v1.34.0 | 15 Sep 24 11:09 PDT | 15 Sep 24 11:09 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 10:55:57
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 10:55:57.054495    2258 out.go:345] Setting OutFile to fd 1 ...
	I0915 10:55:57.054637    2258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:57.054640    2258 out.go:358] Setting ErrFile to fd 2...
	I0915 10:55:57.054642    2258 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:57.054764    2258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 10:55:57.055908    2258 out.go:352] Setting JSON to false
	I0915 10:55:57.072083    2258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1520,"bootTime":1726421437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 10:55:57.072148    2258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 10:55:57.077170    2258 out.go:177] * [addons-620000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 10:55:57.084165    2258 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 10:55:57.084208    2258 notify.go:220] Checking for updates...
	I0915 10:55:57.091164    2258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:55:57.094123    2258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 10:55:57.097128    2258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 10:55:57.099993    2258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 10:55:57.103117    2258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 10:55:57.106238    2258 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 10:55:57.109090    2258 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 10:55:57.116103    2258 start.go:297] selected driver: qemu2
	I0915 10:55:57.116110    2258 start.go:901] validating driver "qemu2" against <nil>
	I0915 10:55:57.116122    2258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 10:55:57.118562    2258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 10:55:57.120182    2258 out.go:177] * Automatically selected the socket_vmnet network
	I0915 10:55:57.123185    2258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 10:55:57.123199    2258 cni.go:84] Creating CNI manager for ""
	I0915 10:55:57.123225    2258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 10:55:57.123232    2258 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 10:55:57.123267    2258 start.go:340] cluster config:
	{Name:addons-620000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:55:57.127120    2258 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 10:55:57.136059    2258 out.go:177] * Starting "addons-620000" primary control-plane node in "addons-620000" cluster
	I0915 10:55:57.140107    2258 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:55:57.140122    2258 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 10:55:57.140128    2258 cache.go:56] Caching tarball of preloaded images
	I0915 10:55:57.140194    2258 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 10:55:57.140200    2258 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 10:55:57.140409    2258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/config.json ...
	I0915 10:55:57.140421    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/config.json: {Name:mk124399d67406e083b3d4b9027751ab945d9658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:55:57.140857    2258 start.go:360] acquireMachinesLock for addons-620000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 10:55:57.140923    2258 start.go:364] duration metric: took 60.083µs to acquireMachinesLock for "addons-620000"
	I0915 10:55:57.140937    2258 start.go:93] Provisioning new machine with config: &{Name:addons-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 10:55:57.140964    2258 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 10:55:57.148123    2258 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0915 10:55:57.383873    2258 start.go:159] libmachine.API.Create for "addons-620000" (driver="qemu2")
	I0915 10:55:57.383928    2258 client.go:168] LocalClient.Create starting
	I0915 10:55:57.384128    2258 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 10:55:57.472327    2258 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 10:55:57.634707    2258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 10:55:57.917118    2258 main.go:141] libmachine: Creating SSH key...
	I0915 10:55:57.967018    2258 main.go:141] libmachine: Creating Disk image...
	I0915 10:55:57.967023    2258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 10:55:57.967271    2258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2
	I0915 10:55:57.986096    2258 main.go:141] libmachine: STDOUT: 
	I0915 10:55:57.986120    2258 main.go:141] libmachine: STDERR: 
	I0915 10:55:57.986183    2258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2 +20000M
	I0915 10:55:57.994123    2258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 10:55:57.994138    2258 main.go:141] libmachine: STDERR: 
	I0915 10:55:57.994151    2258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2
	I0915 10:55:57.994157    2258 main.go:141] libmachine: Starting QEMU VM...
	I0915 10:55:57.994193    2258 qemu.go:418] Using hvf for hardware acceleration
	I0915 10:55:57.994221    2258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:24:74:47:e5:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/disk.qcow2
	I0915 10:55:58.052378    2258 main.go:141] libmachine: STDOUT: 
	I0915 10:55:58.052423    2258 main.go:141] libmachine: STDERR: 
	I0915 10:55:58.052428    2258 main.go:141] libmachine: Attempt 0
	I0915 10:55:58.052441    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:55:58.052495    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:55:58.052514    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:00.054643    2258 main.go:141] libmachine: Attempt 1
	I0915 10:56:00.054719    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:00.055138    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:00.055189    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:02.056564    2258 main.go:141] libmachine: Attempt 2
	I0915 10:56:02.056957    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:02.057239    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:02.057291    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:04.057670    2258 main.go:141] libmachine: Attempt 3
	I0915 10:56:04.057703    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:04.057810    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:04.057824    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:06.059801    2258 main.go:141] libmachine: Attempt 4
	I0915 10:56:06.059812    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:06.059851    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:06.059857    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:08.061838    2258 main.go:141] libmachine: Attempt 5
	I0915 10:56:08.061844    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:08.061887    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:08.061893    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:10.062021    2258 main.go:141] libmachine: Attempt 6
	I0915 10:56:10.062047    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:10.062131    2258 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0915 10:56:10.062141    2258 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e87109}
	I0915 10:56:12.062327    2258 main.go:141] libmachine: Attempt 7
	I0915 10:56:12.062348    2258 main.go:141] libmachine: Searching for 86:24:74:47:e5:d0 in /var/db/dhcpd_leases ...
	I0915 10:56:12.062470    2258 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0915 10:56:12.062494    2258 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:86:24:74:47:e5:d0 ID:1,86:24:74:47:e5:d0 Lease:0x66e8713a}
	I0915 10:56:12.062497    2258 main.go:141] libmachine: Found match: 86:24:74:47:e5:d0
	I0915 10:56:12.062508    2258 main.go:141] libmachine: IP: 192.168.105.2
	I0915 10:56:12.062512    2258 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
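
The attempt loop above is how the qemu2 driver discovers the VM's address: it polls /var/db/dhcpd_leases (maintained by macOS's DHCP service) until an entry's hardware address matches the VM's MAC. A rough scan in the same spirit; the ip_address=/hw_address= field names are an assumption inferred from the dhcp entries echoed in this log:

    // Scan /var/db/dhcpd_leases for a MAC address. The per-line entry format
    // is assumed from the fields printed in the log above.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const mac = "86:24:74:47:e5:d0"
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if strings.HasPrefix(line, "ip_address=") {
    			ip = strings.TrimPrefix(line, "ip_address=") // remember the current entry's IP
    		}
    		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
    			fmt.Println("found match:", mac, "->", ip)
    			return
    		}
    	}
    	fmt.Println("no lease yet for", mac) // caller would sleep and retry, as the log shows
    }
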
	I0915 10:56:14.083566    2258 machine.go:93] provisionDockerMachine start ...
	I0915 10:56:14.084445    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.084991    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.085011    2258 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 10:56:14.158952    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 10:56:14.158981    2258 buildroot.go:166] provisioning hostname "addons-620000"
	I0915 10:56:14.159128    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.159370    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.159383    2258 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-620000 && echo "addons-620000" | sudo tee /etc/hostname
	I0915 10:56:14.228426    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-620000
	
	I0915 10:56:14.228527    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.228715    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.228728    2258 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-620000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-620000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-620000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 10:56:14.287474    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: 
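
"Using SSH client type: native" means the provisioner drives the guest through Go's golang.org/x/crypto/ssh package rather than shelling out to ssh(1). A minimal sketch of running one command that way, with the user, address, and key path taken from this log:

    // Native-Go SSH exec, as implied by "SSH client type: native" above.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa")
    	must(err)
    	signer, err := ssh.ParsePrivateKey(key)
    	must(err)

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
    	must(err)
    	defer client.Close()

    	sess, err := client.NewSession()
    	must(err)
    	defer sess.Close()

    	out, err := sess.Output("hostname") // the provisioner's first command
    	must(err)
    	fmt.Printf("%s", out)
    }
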
	I0915 10:56:14.287488    2258 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1650/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1650/.minikube}
	I0915 10:56:14.287497    2258 buildroot.go:174] setting up certificates
	I0915 10:56:14.287502    2258 provision.go:84] configureAuth start
	I0915 10:56:14.287507    2258 provision.go:143] copyHostCerts
	I0915 10:56:14.287626    2258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem (1078 bytes)
	I0915 10:56:14.288467    2258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem (1123 bytes)
	I0915 10:56:14.288638    2258 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem (1679 bytes)
	I0915 10:56:14.288791    2258 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem org=jenkins.addons-620000 san=[127.0.0.1 192.168.105.2 addons-620000 localhost minikube]
	I0915 10:56:14.361140    2258 provision.go:177] copyRemoteCerts
	I0915 10:56:14.361197    2258 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 10:56:14.361205    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:14.390260    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 10:56:14.402954    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 10:56:14.411186    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 10:56:14.419476    2258 provision.go:87] duration metric: took 131.973625ms to configureAuth
	I0915 10:56:14.419485    2258 buildroot.go:189] setting minikube options for container-runtime
	I0915 10:56:14.419581    2258 config.go:182] Loaded profile config "addons-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 10:56:14.419622    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.419708    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.419713    2258 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 10:56:14.472487    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0915 10:56:14.472497    2258 buildroot.go:70] root file system type: tmpfs
	I0915 10:56:14.472546    2258 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 10:56:14.472596    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.472755    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.472789    2258 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 10:56:14.529569    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 10:56:14.529639    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:14.529739    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:14.529747    2258 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 10:56:15.884934    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0915 10:56:15.884946    2258 machine.go:96] duration metric: took 1.801413417s to provisionDockerMachine
	I0915 10:56:15.884953    2258 client.go:171] duration metric: took 18.501673583s to LocalClient.Create
	I0915 10:56:15.884968    2258 start.go:167] duration metric: took 18.501754042s to libmachine.API.Create "addons-620000"
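
The diff -u ... || { mv ...; systemctl ...; } one-liner a few lines up is a compare-then-swap update: the unit file is replaced, and docker reloaded and restarted, only when the freshly rendered unit differs from what is on disk (here the diff failed simply because no unit existed yet). The same idea as a small host-side sketch:

    // Compare-then-swap file update: replace the target and signal a restart
    // only when the rendered content actually changed.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func updateIfChanged(path string, rendered []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, rendered) {
    		return false, nil // unchanged: skip daemon-reload/restart
    	}
    	if err := os.WriteFile(path, rendered, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("restart needed:", changed)
    }
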
	I0915 10:56:15.884973    2258 start.go:293] postStartSetup for "addons-620000" (driver="qemu2")
	I0915 10:56:15.884979    2258 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 10:56:15.885050    2258 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 10:56:15.885059    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:15.914449    2258 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 10:56:15.916117    2258 info.go:137] Remote host: Buildroot 2023.02.9
	I0915 10:56:15.916131    2258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/addons for local assets ...
	I0915 10:56:15.916226    2258 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/files for local assets ...
	I0915 10:56:15.916260    2258 start.go:296] duration metric: took 31.284875ms for postStartSetup
	I0915 10:56:15.916688    2258 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/config.json ...
	I0915 10:56:15.916869    2258 start.go:128] duration metric: took 18.776564083s to createHost
	I0915 10:56:15.916897    2258 main.go:141] libmachine: Using SSH client type: native
	I0915 10:56:15.916987    2258 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100e55190] 0x100e579d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0915 10:56:15.916991    2258 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 10:56:15.968100    2258 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726422975.985427336
	
	I0915 10:56:15.968107    2258 fix.go:216] guest clock: 1726422975.985427336
	I0915 10:56:15.968111    2258 fix.go:229] Guest: 2024-09-15 10:56:15.985427336 -0700 PDT Remote: 2024-09-15 10:56:15.916872 -0700 PDT m=+18.882134168 (delta=68.555336ms)
	I0915 10:56:15.968122    2258 fix.go:200] guest clock delta is within tolerance: 68.555336ms
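
The fix.go lines above read the guest's `date +%s.%N`, turn it into a timestamp, and compare it against the host clock before proceeding. A sketch of that arithmetic using the values from this run; the 2s tolerance is an illustrative assumption, not minikube's actual setting:

    // Guest/host clock-skew check mirroring the fix.go lines above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	guestRaw := "1726422975.985427336" // guest output of `date +%s.%N`
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		panic(err)
    	}
    	// float64 loses sub-microsecond precision here, which is fine for a skew check.
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Date(2024, 9, 15, 10, 56, 15, 916872000, time.FixedZone("PDT", -7*3600))

    	delta := guest.Sub(host) // ~68.5ms in this run
    	const tolerance = 2 * time.Second
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
    }
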
	I0915 10:56:15.968124    2258 start.go:83] releasing machines lock for "addons-620000", held for 18.827859792s
	I0915 10:56:15.968403    2258 ssh_runner.go:195] Run: cat /version.json
	I0915 10:56:15.968413    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:15.968406    2258 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 10:56:15.968457    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:15.997245    2258 ssh_runner.go:195] Run: systemctl --version
	I0915 10:56:16.041122    2258 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 10:56:16.043293    2258 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 10:56:16.043329    2258 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 10:56:16.049687    2258 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 10:56:16.049697    2258 start.go:495] detecting cgroup driver to use...
	I0915 10:56:16.049853    2258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 10:56:16.056753    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 10:56:16.060338    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 10:56:16.063922    2258 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 10:56:16.063951    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 10:56:16.067376    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 10:56:16.071143    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 10:56:16.075102    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 10:56:16.079062    2258 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 10:56:16.082944    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 10:56:16.086896    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 10:56:16.090798    2258 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 10:56:16.094693    2258 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 10:56:16.098742    2258 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 10:56:16.102353    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:16.189183    2258 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0915 10:56:16.200302    2258 start.go:495] detecting cgroup driver to use...
	I0915 10:56:16.200392    2258 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 10:56:16.207118    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 10:56:16.216406    2258 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 10:56:16.222961    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 10:56:16.228469    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 10:56:16.233762    2258 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0915 10:56:16.270319    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 10:56:16.276465    2258 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 10:56:16.282877    2258 ssh_runner.go:195] Run: which cri-dockerd
	I0915 10:56:16.284317    2258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 10:56:16.287475    2258 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 10:56:16.293609    2258 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 10:56:16.359758    2258 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 10:56:16.431822    2258 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 10:56:16.431883    2258 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 10:56:16.437899    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:16.520521    2258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 10:56:18.703480    2258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.183018291s)
	I0915 10:56:18.703540    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 10:56:18.708916    2258 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0915 10:56:18.715454    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 10:56:18.720899    2258 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 10:56:18.791884    2258 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 10:56:18.858708    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:18.943338    2258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 10:56:18.949940    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 10:56:18.955617    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:19.039537    2258 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 10:56:19.075590    2258 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 10:56:19.075714    2258 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
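
"Will wait 60s for socket path" is a plain poll-until-deadline on stat. A sketch:

    // Poll for a filesystem path until a deadline, as in the 60s wait for
    // /var/run/cri-dockerd.sock above.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil // the socket (or any file) now exists
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
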
	I0915 10:56:19.077917    2258 start.go:563] Will wait 60s for crictl version
	I0915 10:56:19.077967    2258 ssh_runner.go:195] Run: which crictl
	I0915 10:56:19.079569    2258 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 10:56:19.096993    2258 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
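The version probe above can be reproduced by pointing crictl directly at the cri-dockerd socket (standard crictl flags; shown for orientation):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version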
	I0915 10:56:19.097072    2258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 10:56:19.109542    2258 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 10:56:19.119584    2258 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 10:56:19.119730    2258 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0915 10:56:19.121097    2258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
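The grep/echo/cp pipeline above is an idempotent rewrite: it drops any stale host.minikube.internal entry, appends the current gateway mapping, and copies the temp file back over /etc/hosts. Verifying the result (sketch):

	grep host.minikube.internal /etc/hosts
	# 192.168.105.1	host.minikube.internal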
	I0915 10:56:19.125501    2258 kubeadm.go:883] updating cluster {Name:addons-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 10:56:19.125548    2258 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:56:19.125605    2258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 10:56:19.131188    2258 docker.go:685] Got preloaded images: 
	I0915 10:56:19.131198    2258 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0915 10:56:19.131247    2258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 10:56:19.134850    2258 ssh_runner.go:195] Run: which lz4
	I0915 10:56:19.136345    2258 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 10:56:19.137658    2258 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 10:56:19.137668    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0915 10:56:20.394503    2258 docker.go:649] duration metric: took 1.258254875s to copy over tarball
	I0915 10:56:20.394562    2258 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 10:56:21.367463    2258 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 10:56:21.382361    2258 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 10:56:21.386294    2258 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0915 10:56:21.392330    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:21.461583    2258 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 10:56:23.660758    2258 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.199234625s)
	I0915 10:56:23.660885    2258 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 10:56:23.667031    2258 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 10:56:23.667040    2258 cache_images.go:84] Images are preloaded, skipping loading
	I0915 10:56:23.667061    2258 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0915 10:56:23.667131    2258 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-620000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
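The ExecStart override above is installed a few lines below as a systemd drop-in (10-kubeadm.conf); once it is in place, the merged unit can be inspected with the standard systemd command (orientation only):

	sudo systemctl cat kubelet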
	I0915 10:56:23.667201    2258 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 10:56:23.689047    2258 cni.go:84] Creating CNI manager for ""
	I0915 10:56:23.689059    2258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 10:56:23.689065    2258 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 10:56:23.689075    2258 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-620000 NodeName:addons-620000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 10:56:23.689160    2258 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-620000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 10:56:23.689236    2258 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 10:56:23.693479    2258 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 10:56:23.693510    2258 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 10:56:23.697091    2258 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0915 10:56:23.702738    2258 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 10:56:23.708977    2258 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
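Once the rendered config lands on the node as kubeadm.yaml.new, it can be sanity-checked without touching cluster state; a sketch using kubeadm's standard dry-run mode and the same pinned binary path this run uses:

	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run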
	I0915 10:56:23.714949    2258 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0915 10:56:23.716339    2258 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 10:56:23.720304    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:23.791060    2258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 10:56:23.804049    2258 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000 for IP: 192.168.105.2
	I0915 10:56:23.804071    2258 certs.go:194] generating shared ca certs ...
	I0915 10:56:23.804080    2258 certs.go:226] acquiring lock for ca certs: {Name:mkae14c7548e7e09ac75f5a854dc2935289ebc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:23.804279    2258 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key
	I0915 10:56:23.899175    2258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt ...
	I0915 10:56:23.899185    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt: {Name:mkaf0cf190b675e8d5be6b9d14da3b750d0e7a79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:23.899504    2258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key ...
	I0915 10:56:23.899509    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key: {Name:mk79cb69e635ee7058cd82bb8ffddc9b2f7a2eca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:23.899651    2258 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key
	I0915 10:56:24.007222    2258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt ...
	I0915 10:56:24.007226    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt: {Name:mk648bd74824b5cf2d2c7bdc696d5071ce4a9a94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.007382    2258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key ...
	I0915 10:56:24.007385    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key: {Name:mk3fc7f7ec8158b2cab8404421ef8799b03347f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.007512    2258 certs.go:256] generating profile certs ...
	I0915 10:56:24.007548    2258 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.key
	I0915 10:56:24.007557    2258 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt with IP's: []
	I0915 10:56:24.182916    2258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt ...
	I0915 10:56:24.182931    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: {Name:mkb71ab221d82f9b2376a18c1af1f295373e291a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.183430    2258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.key ...
	I0915 10:56:24.183435    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.key: {Name:mkab44acdba79969fa8d61ce76ebe34c66dde14b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.183572    2258 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key.602573ed
	I0915 10:56:24.183586    2258 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt.602573ed with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0915 10:56:24.261281    2258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt.602573ed ...
	I0915 10:56:24.261284    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt.602573ed: {Name:mkd9bee70cdb86b7b65a42299096940dfa8710a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.261423    2258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key.602573ed ...
	I0915 10:56:24.261427    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key.602573ed: {Name:mk8c0ddcfbf542ffb5defe2eeda9d6b53c14c1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.261541    2258 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt.602573ed -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt
	I0915 10:56:24.261636    2258 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key.602573ed -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key
	I0915 10:56:24.261740    2258 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.key
	I0915 10:56:24.261748    2258 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.crt with IP's: []
	I0915 10:56:24.330169    2258 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.crt ...
	I0915 10:56:24.330173    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.crt: {Name:mkd9f6dcff8f0ba1488ce6b063689923c9b47b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.330324    2258 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.key ...
	I0915 10:56:24.330327    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.key: {Name:mkb82a1cb772729be8e14766155cb2198835a5d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:24.330585    2258 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 10:56:24.330615    2258 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem (1078 bytes)
	I0915 10:56:24.330634    2258 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem (1123 bytes)
	I0915 10:56:24.330652    2258 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem (1679 bytes)
	I0915 10:56:24.331057    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 10:56:24.339997    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 10:56:24.347892    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 10:56:24.355817    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 10:56:24.363667    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 10:56:24.371547    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 10:56:24.379670    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 10:56:24.387626    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 10:56:24.395530    2258 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 10:56:24.403632    2258 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 10:56:24.410422    2258 ssh_runner.go:195] Run: openssl version
	I0915 10:56:24.412856    2258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 10:56:24.416457    2258 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 10:56:24.418181    2258 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0915 10:56:24.418201    2258 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 10:56:24.420501    2258 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
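The b5213941.0 link name is not arbitrary: it is the CA's OpenSSL subject hash (produced by the x509 -hash call above) with the conventional ".0" suffix, which is how OpenSSL locates trusted certificates in /etc/ssl/certs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941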
	I0915 10:56:24.424058    2258 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 10:56:24.425533    2258 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 10:56:24.425575    2258 kubeadm.go:392] StartCluster: {Name:addons-620000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-620000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:56:24.425649    2258 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 10:56:24.430807    2258 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 10:56:24.439386    2258 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 10:56:24.443435    2258 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 10:56:24.447500    2258 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 10:56:24.447506    2258 kubeadm.go:157] found existing configuration files:
	
	I0915 10:56:24.447550    2258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 10:56:24.451274    2258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 10:56:24.451329    2258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 10:56:24.455078    2258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 10:56:24.458986    2258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 10:56:24.459026    2258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 10:56:24.462369    2258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 10:56:24.465861    2258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 10:56:24.465892    2258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 10:56:24.469317    2258 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 10:56:24.472443    2258 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 10:56:24.472471    2258 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 10:56:24.475512    2258 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 10:56:24.495901    2258 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 10:56:24.495929    2258 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 10:56:24.531046    2258 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 10:56:24.531112    2258 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 10:56:24.531164    2258 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 10:56:24.535726    2258 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 10:56:24.543864    2258 out.go:235]   - Generating certificates and keys ...
	I0915 10:56:24.543898    2258 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 10:56:24.543932    2258 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 10:56:24.597659    2258 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 10:56:24.679407    2258 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 10:56:24.741775    2258 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 10:56:24.816054    2258 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 10:56:24.868875    2258 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 10:56:24.868942    2258 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-620000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0915 10:56:24.944258    2258 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 10:56:24.944322    2258 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-620000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0915 10:56:25.015826    2258 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 10:56:25.273142    2258 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 10:56:25.344931    2258 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 10:56:25.344967    2258 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 10:56:25.412346    2258 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 10:56:25.487478    2258 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 10:56:25.561173    2258 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 10:56:25.738277    2258 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 10:56:25.831093    2258 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 10:56:25.831410    2258 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 10:56:25.832585    2258 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 10:56:25.839982    2258 out.go:235]   - Booting up control plane ...
	I0915 10:56:25.840033    2258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 10:56:25.840076    2258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 10:56:25.840107    2258 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 10:56:25.840579    2258 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 10:56:25.843280    2258 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 10:56:25.843310    2258 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 10:56:25.917326    2258 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 10:56:25.917391    2258 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 10:56:26.423250    2258 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 505.425001ms
	I0915 10:56:26.423458    2258 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 10:56:29.429504    2258 kubeadm.go:310] [api-check] The API server is healthy after 3.006173585s
	I0915 10:56:29.453231    2258 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 10:56:29.464859    2258 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 10:56:29.477544    2258 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 10:56:29.477702    2258 kubeadm.go:310] [mark-control-plane] Marking the node addons-620000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 10:56:29.482751    2258 kubeadm.go:310] [bootstrap-token] Using token: 8cwcha.29jqum9yx58f0e5x
	I0915 10:56:29.488573    2258 out.go:235]   - Configuring RBAC rules ...
	I0915 10:56:29.488668    2258 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 10:56:29.490412    2258 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 10:56:29.497950    2258 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 10:56:29.499157    2258 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 10:56:29.500429    2258 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 10:56:29.501629    2258 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 10:56:29.840730    2258 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 10:56:30.249240    2258 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 10:56:30.838816    2258 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 10:56:30.839936    2258 kubeadm.go:310] 
	I0915 10:56:30.840090    2258 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 10:56:30.840100    2258 kubeadm.go:310] 
	I0915 10:56:30.840234    2258 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 10:56:30.840246    2258 kubeadm.go:310] 
	I0915 10:56:30.840269    2258 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 10:56:30.840335    2258 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 10:56:30.840381    2258 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 10:56:30.840391    2258 kubeadm.go:310] 
	I0915 10:56:30.840444    2258 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 10:56:30.840450    2258 kubeadm.go:310] 
	I0915 10:56:30.840506    2258 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 10:56:30.840519    2258 kubeadm.go:310] 
	I0915 10:56:30.840593    2258 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 10:56:30.840672    2258 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 10:56:30.840745    2258 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 10:56:30.840755    2258 kubeadm.go:310] 
	I0915 10:56:30.840859    2258 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 10:56:30.840943    2258 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 10:56:30.840999    2258 kubeadm.go:310] 
	I0915 10:56:30.841084    2258 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8cwcha.29jqum9yx58f0e5x \
	I0915 10:56:30.841279    2258 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd \
	I0915 10:56:30.841311    2258 kubeadm.go:310] 	--control-plane 
	I0915 10:56:30.841316    2258 kubeadm.go:310] 
	I0915 10:56:30.841418    2258 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 10:56:30.841430    2258 kubeadm.go:310] 
	I0915 10:56:30.841544    2258 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8cwcha.29jqum9yx58f0e5x \
	I0915 10:56:30.841715    2258 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd 
	I0915 10:56:30.842091    2258 kubeadm.go:310] W0915 17:56:24.512134    1581 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 10:56:30.842461    2258 kubeadm.go:310] W0915 17:56:24.512564    1581 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 10:56:30.842608    2258 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
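The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key; it can be recomputed on the control plane with the standard kubeadm recipe (CA path per the certificatesDir in this run's config):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'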
	I0915 10:56:30.842629    2258 cni.go:84] Creating CNI manager for ""
	I0915 10:56:30.842644    2258 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 10:56:30.850975    2258 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 10:56:30.855102    2258 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 10:56:30.862867    2258 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
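The 496-byte conflist written here is what backs the "bridge" CNI chosen above. Its exact contents are not echoed in the log; the sketch below shows the general shape of a bridge conflist using this run's pod CIDR (field values are assumptions, not the captured payload):

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF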
	I0915 10:56:30.872695    2258 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 10:56:30.872770    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:30.872824    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-620000 minikube.k8s.io/updated_at=2024_09_15T10_56_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=addons-620000 minikube.k8s.io/primary=true
	I0915 10:56:30.884772    2258 ops.go:34] apiserver oom_adj: -16
	I0915 10:56:30.934700    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:31.436313    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:31.936818    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:32.436804    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:32.934907    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:33.435715    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:33.936764    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:34.436711    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:34.936766    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:35.436225    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:35.936597    2258 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 10:56:35.971602    2258 kubeadm.go:1113] duration metric: took 5.099067167s to wait for elevateKubeSystemPrivileges
	I0915 10:56:35.971618    2258 kubeadm.go:394] duration metric: took 11.5464515s to StartCluster
	I0915 10:56:35.971629    2258 settings.go:142] acquiring lock: {Name:mke41fab1fd2ef0229fde23400affd11462eeb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:35.971805    2258 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:56:35.971997    2258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:56:35.972260    2258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 10:56:35.972270    2258 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 10:56:35.972303    2258 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
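Every addon marked true in the toEnable map above is reconciled for this profile; the same toggles are available from the host via the standard minikube CLI (orientation only, not part of the captured run):

	minikube -p addons-620000 addons list
	minikube -p addons-620000 addons enable registry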
	I0915 10:56:35.972344    2258 addons.go:69] Setting gcp-auth=true in profile "addons-620000"
	I0915 10:56:35.972345    2258 addons.go:69] Setting yakd=true in profile "addons-620000"
	I0915 10:56:35.972360    2258 mustload.go:65] Loading cluster: addons-620000
	I0915 10:56:35.972363    2258 addons.go:234] Setting addon yakd=true in "addons-620000"
	I0915 10:56:35.972371    2258 config.go:182] Loaded profile config "addons-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 10:56:35.972376    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972398    2258 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-620000"
	I0915 10:56:35.972403    2258 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-620000"
	I0915 10:56:35.972411    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972443    2258 config.go:182] Loaded profile config "addons-620000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 10:56:35.972453    2258 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-620000"
	I0915 10:56:35.972453    2258 addons.go:69] Setting registry=true in profile "addons-620000"
	I0915 10:56:35.972471    2258 addons.go:69] Setting default-storageclass=true in profile "addons-620000"
	I0915 10:56:35.972475    2258 addons.go:69] Setting storage-provisioner=true in profile "addons-620000"
	I0915 10:56:35.972482    2258 addons.go:234] Setting addon registry=true in "addons-620000"
	I0915 10:56:35.972487    2258 addons.go:69] Setting volumesnapshots=true in profile "addons-620000"
	I0915 10:56:35.972491    2258 addons.go:234] Setting addon storage-provisioner=true in "addons-620000"
	I0915 10:56:35.972511    2258 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-620000"
	I0915 10:56:35.972523    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972531    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972539    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972484    2258 addons.go:69] Setting volcano=true in profile "addons-620000"
	I0915 10:56:35.972573    2258 addons.go:234] Setting addon volcano=true in "addons-620000"
	I0915 10:56:35.972599    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972477    2258 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-620000"
	I0915 10:56:35.972678    2258 retry.go:31] will retry after 1.460422611s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972768    2258 retry.go:31] will retry after 524.894711ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972493    2258 addons.go:234] Setting addon volumesnapshots=true in "addons-620000"
	I0915 10:56:35.972779    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.972875    2258 retry.go:31] will retry after 1.427889468s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972465    2258 addons.go:69] Setting ingress-dns=true in profile "addons-620000"
	I0915 10:56:35.972982    2258 addons.go:234] Setting addon ingress-dns=true in "addons-620000"
	I0915 10:56:35.972988    2258 retry.go:31] will retry after 661.873093ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972985    2258 retry.go:31] will retry after 1.082245569s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972450    2258 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-620000"
	I0915 10:56:35.973009    2258 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-620000"
	I0915 10:56:35.972480    2258 addons.go:69] Setting cloud-spanner=true in profile "addons-620000"
	I0915 10:56:35.973049    2258 retry.go:31] will retry after 1.229417721s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.973052    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973069    2258 addons.go:234] Setting addon cloud-spanner=true in "addons-620000"
	I0915 10:56:35.972469    2258 addons.go:69] Setting metrics-server=true in profile "addons-620000"
	I0915 10:56:35.973077    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973080    2258 addons.go:234] Setting addon metrics-server=true in "addons-620000"
	I0915 10:56:35.973088    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973173    2258 retry.go:31] will retry after 930.465155ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972463    2258 addons.go:69] Setting ingress=true in profile "addons-620000"
	I0915 10:56:35.973200    2258 addons.go:234] Setting addon ingress=true in "addons-620000"
	I0915 10:56:35.973209    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973212    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973372    2258 retry.go:31] will retry after 506.006029ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.973390    2258 retry.go:31] will retry after 1.329576925s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.972467    2258 addons.go:69] Setting inspektor-gadget=true in profile "addons-620000"
	I0915 10:56:35.973398    2258 addons.go:234] Setting addon inspektor-gadget=true in "addons-620000"
	I0915 10:56:35.973407    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:35.973460    2258 retry.go:31] will retry after 817.961082ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.973495    2258 retry.go:31] will retry after 1.230292107s: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.973624    2258 retry.go:31] will retry after 671.022419ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/monitor: connect: connection refused
	I0915 10:56:35.977039    2258 out.go:177] * Verifying Kubernetes components...
	I0915 10:56:35.986956    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 10:56:35.986956    2258 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 10:56:35.990933    2258 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 10:56:35.997050    2258 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 10:56:35.997057    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 10:56:35.997067    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:35.999956    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 10:56:36.007045    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 10:56:36.011007    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 10:56:36.014983    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 10:56:36.018974    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 10:56:36.022916    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 10:56:36.030817    2258 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
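The sed pipeline above rewrites the coredns ConfigMap in place; per its two insert expressions, the resulting Corefile gains a log directive ahead of errors and a hosts block ahead of the forward rule, roughly:

	# .:53 {
	#     log
	#     errors
	#     ...
	#     hosts {
	#        192.168.105.1 host.minikube.internal
	#        fallthrough
	#     }
	#     forward . /etc/resolv.conf ...
	# }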
	I0915 10:56:36.032797    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 10:56:36.036939    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 10:56:36.036944    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 10:56:36.036956    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.104648    2258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 10:56:36.127140    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 10:56:36.205418    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 10:56:36.205431    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 10:56:36.295526    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 10:56:36.295540    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 10:56:36.383857    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 10:56:36.383869    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 10:56:36.418406    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 10:56:36.418419    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 10:56:36.433840    2258 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0915 10:56:36.434144    2258 node_ready.go:35] waiting up to 6m0s for node "addons-620000" to be "Ready" ...
	I0915 10:56:36.439627    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 10:56:36.439646    2258 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 10:56:36.445721    2258 node_ready.go:49] node "addons-620000" has status "Ready":"True"
	I0915 10:56:36.445741    2258 node_ready.go:38] duration metric: took 11.576542ms for node "addons-620000" to be "Ready" ...
	I0915 10:56:36.445746    2258 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 10:56:36.453851    2258 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace to be "Ready" ...
	I0915 10:56:36.482346    2258 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-620000"
	I0915 10:56:36.482372    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:36.487376    2258 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 10:56:36.491349    2258 out.go:177]   - Using image docker.io/busybox:stable
	I0915 10:56:36.495330    2258 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 10:56:36.495346    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 10:56:36.495358    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.495656    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 10:56:36.495661    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 10:56:36.502326    2258 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 10:56:36.508324    2258 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 10:56:36.508334    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 10:56:36.508346    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.541143    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 10:56:36.552171    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 10:56:36.552185    2258 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 10:56:36.575115    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 10:56:36.587601    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 10:56:36.587612    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 10:56:36.641342    2258 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 10:56:36.645347    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 10:56:36.645358    2258 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 10:56:36.645370    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.645688    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 10:56:36.645694    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 10:56:36.650312    2258 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 10:56:36.653239    2258 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 10:56:36.653250    2258 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 10:56:36.653262    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.681771    2258 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 10:56:36.681784    2258 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 10:56:36.699302    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 10:56:36.717269    2258 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 10:56:36.717279    2258 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 10:56:36.723812    2258 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 10:56:36.723823    2258 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 10:56:36.734114    2258 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 10:56:36.734127    2258 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 10:56:36.737647    2258 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 10:56:36.737656    2258 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 10:56:36.754397    2258 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 10:56:36.754409    2258 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 10:56:36.765737    2258 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 10:56:36.765754    2258 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 10:56:36.797420    2258 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 10:56:36.801394    2258 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 10:56:36.805398    2258 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 10:56:36.809424    2258 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 10:56:36.809435    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 10:56:36.809445    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.809598    2258 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 10:56:36.809627    2258 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 10:56:36.809602    2258 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 10:56:36.809669    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 10:56:36.832192    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 10:56:36.840994    2258 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 10:56:36.841003    2258 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 10:56:36.879546    2258 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 10:56:36.879561    2258 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 10:56:36.909458    2258 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 10:56:36.913328    2258 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 10:56:36.917383    2258 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 10:56:36.917392    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 10:56:36.917402    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:36.925937    2258 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 10:56:36.925949    2258 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 10:56:36.940245    2258 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-620000" context rescaled to 1 replicas
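
The kapi.go:214 line records minikube trimming CoreDNS from the default two replicas down to one for this single-node cluster. A rough equivalent of that rescale, sketched with kubectl scale (the flags are standard kubectl; the Go wrapper is illustrative, not minikube's actual code path):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Scale the kube-system coredns Deployment to a single replica,
    	// the same effect the kapi.go line above records.
    	out, err := exec.Command("kubectl", "scale", "deployment", "coredns",
    		"-n", "kube-system", "--replicas=1").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("scale failed:", err)
    	}
    }
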
	I0915 10:56:36.947124    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 10:56:36.986168    2258 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 10:56:36.986177    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 10:56:37.035534    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 10:56:37.056180    2258 addons.go:234] Setting addon default-storageclass=true in "addons-620000"
	I0915 10:56:37.056200    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:37.056804    2258 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 10:56:37.056811    2258 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 10:56:37.056817    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.079178    2258 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 10:56:37.079192    2258 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 10:56:37.147119    2258 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 10:56:37.147129    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 10:56:37.185295    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 10:56:37.207741    2258 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 10:56:37.211792    2258 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 10:56:37.215738    2258 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 10:56:37.219784    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 10:56:37.220093    2258 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 10:56:37.220100    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 10:56:37.220108    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.223698    2258 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 10:56:37.227666    2258 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 10:56:37.227675    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 10:56:37.227685    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.308765    2258 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 10:56:37.311738    2258 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 10:56:37.311746    2258 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 10:56:37.311756    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.405698    2258 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 10:56:37.408805    2258 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 10:56:37.408812    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 10:56:37.408823    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.437750    2258 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 10:56:37.440639    2258 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 10:56:37.440647    2258 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 10:56:37.440658    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:37.488754    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 10:56:37.557889    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 10:56:37.576934    2258 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 10:56:37.576945    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 10:56:37.726157    2258 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 10:56:37.726171    2258 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 10:56:37.756408    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 10:56:37.844025    2258 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 10:56:37.844042    2258 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 10:56:37.882938    2258 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 10:56:37.882951    2258 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 10:56:37.942631    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 10:56:37.982295    2258 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 10:56:37.982309    2258 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 10:56:38.111026    2258 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 10:56:38.111039    2258 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 10:56:38.191264    2258 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 10:56:38.191275    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 10:56:38.243292    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 10:56:38.478127    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:40.004611    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.17251175s)
	W0915 10:56:40.004636    2258 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 10:56:40.004651    2258 retry.go:31] will retry after 372.640654ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
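
This failure is the usual CRD-ordering race rather than a broken manifest: the batch creates the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass in a single apply, and the custom resource cannot be mapped until the API server has registered the new CRDs, hence "ensure CRDs are installed first". The retry.go line above re-runs the apply after a short backoff (and the re-apply at 10:56:40.377 below adds --force). A minimal sketch of that retry shape, with a fixed backoff standing in for minikube's actual retry policy and an illustrative single-file manifest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs kubectl apply until it succeeds, since custom
    // resources applied in the same batch as their CRD can fail with
    // "resource mapping not found" until the CRD is registered.
    func applyWithRetry(manifest string, attempts int, backoff time.Duration) error {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
    		if err == nil {
    			return nil
    		}
    		lastErr = err
    		fmt.Printf("apply failed, will retry after %s:\n%s", backoff, out)
    		time.Sleep(backoff)
    	}
    	return fmt.Errorf("apply of %s failed after %d attempts: %w", manifest, attempts, lastErr)
    }

    func main() {
    	_ = applyWithRetry("/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    		3, 400*time.Millisecond)
    }
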
	I0915 10:56:40.004708    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.057680875s)
	I0915 10:56:40.004739    2258 addons.go:475] Verifying addon ingress=true in "addons-620000"
	I0915 10:56:40.004709    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.305505833s)
	I0915 10:56:40.004766    2258 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-620000"
	I0915 10:56:40.004893    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.9694495s)
	I0915 10:56:40.004916    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.819705583s)
	I0915 10:56:40.005417    2258 addons.go:475] Verifying addon registry=true in "addons-620000"
	I0915 10:56:40.004931    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.785236625s)
	I0915 10:56:40.008884    2258 out.go:177] * Verifying ingress addon...
	I0915 10:56:40.017819    2258 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 10:56:40.024867    2258 out.go:177] * Verifying registry addon...
	I0915 10:56:40.032199    2258 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 10:56:40.035224    2258 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 10:56:40.038103    2258 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 10:56:40.050489    2258 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 10:56:40.050498    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:40.050638    2258 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 10:56:40.050645    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:40.050784    2258 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 10:56:40.050790    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:40.377677    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 10:56:40.526257    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:40.562033    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:40.562195    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:40.562231    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:41.034906    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:41.149811    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:41.150092    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:41.211768    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.723125958s)
	I0915 10:56:41.211784    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.654011584s)
	I0915 10:56:41.211796    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.455500959s)
	I0915 10:56:41.211838    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.269311666s)
	I0915 10:56:41.211846    2258 addons.go:475] Verifying addon metrics-server=true in "addons-620000"
	I0915 10:56:41.211867    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.968671084s)
	I0915 10:56:41.216795    2258 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-620000 service yakd-dashboard -n yakd-dashboard
	
	I0915 10:56:41.382341    2258 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.004681042s)
	I0915 10:56:41.536584    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:41.537719    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:41.539838    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:42.036459    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:42.037604    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:42.040843    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:42.536622    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:42.538081    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:42.539969    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:42.959547    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:42.981080    2258 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 10:56:42.981095    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:43.012907    2258 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 10:56:43.019502    2258 addons.go:234] Setting addon gcp-auth=true in "addons-620000"
	I0915 10:56:43.019527    2258 host.go:66] Checking if "addons-620000" exists ...
	I0915 10:56:43.020264    2258 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 10:56:43.020271    2258 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/addons-620000/id_rsa Username:docker}
	I0915 10:56:43.036028    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:43.037758    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:43.040402    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:43.052258    2258 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 10:56:43.055115    2258 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 10:56:43.059139    2258 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 10:56:43.059145    2258 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 10:56:43.065931    2258 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 10:56:43.065939    2258 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 10:56:43.073656    2258 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 10:56:43.073664    2258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 10:56:43.079500    2258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 10:56:43.381246    2258 addons.go:475] Verifying addon gcp-auth=true in "addons-620000"
	I0915 10:56:43.384630    2258 out.go:177] * Verifying gcp-auth addon...
	I0915 10:56:43.395012    2258 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 10:56:43.396233    2258 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 10:56:43.537495    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:43.538618    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:43.540529    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:44.038730    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:44.138383    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:44.138614    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:44.538186    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:44.539412    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:44.541350    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:44.961257    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:45.040485    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:45.042623    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:45.042902    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:45.539650    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:45.540946    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:45.542632    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:46.038517    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:46.039816    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:46.042445    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:46.463542    2258 pod_ready.go:98] pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:46 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-15 10:56:35 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-15 10:56:36 -0700 PDT,FinishedAt:2024-09-15 10:56:46 -0700 PDT,ContainerID:docker://bea5aa5d8f1743e08147fa9a52c9aab63cac461b50e4796d9a5c28a78273dcc3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bea5aa5d8f1743e08147fa9a52c9aab63cac461b50e4796d9a5c28a78273dcc3 Started:0x140013cadd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14001ec11d0} {Name:kube-api-access-sntrh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14001ec11e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0915 10:56:46.463556    2258 pod_ready.go:82] duration metric: took 10.007096584s for pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace to be "Ready" ...
	E0915 10:56:46.463560    2258 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-gzgv4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:46 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-15 10:56:35 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.105.2 HostIPs:[{IP:192.168.105.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-15 10:56:35 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-15 10:56:36 -0700 PDT,FinishedAt:2024-09-15 10:56:46 -0700 PDT,ContainerID:docker://bea5aa5d8f1743e08147fa9a52c9aab63cac461b50e4796d9a5c28a78273dcc3,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://bea5aa5d8f1743e08147fa9a52c9aab63cac461b50e4796d9a5c28a78273dcc3 Started:0x140013cadd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x14001ec11d0} {Name:kube-api-access-sntrh MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x14001ec11e0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0915 10:56:46.463565    2258 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace to be "Ready" ...
	I0915 10:56:46.539290    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:46.540419    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:46.542358    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:47.038901    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:47.040239    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:47.042922    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:47.539611    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:47.540642    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:47.542969    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:48.039783    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:48.040728    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:48.043104    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:48.470563    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:48.539284    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:48.541055    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:48.543324    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:49.040277    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:49.041616    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:49.043603    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:49.540382    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:49.541552    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:49.543655    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:50.040937    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:50.041971    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:50.043912    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:50.541166    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:50.542212    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:50.544006    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:50.970056    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:51.040746    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:51.042854    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:51.044135    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:51.541136    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:51.542077    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:51.544434    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:52.041127    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:52.042396    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:52.044663    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:52.540205    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:52.542300    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:52.544834    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:52.970795    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:53.041498    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:53.042595    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:53.044864    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:53.541435    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:53.542573    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:53.545159    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:54.043042    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:54.044106    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:54.045577    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:54.542456    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:54.543516    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:54.545277    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:54.971590    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:55.042455    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:55.043850    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:55.045608    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:55.542525    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:55.544074    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:55.546852    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:56.042495    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:56.043683    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:56.045680    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:56.542668    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:56.545560    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:56.546363    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:57.043957    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:57.044623    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:57.045903    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:57.474268    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:56:57.542722    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:57.544007    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:57.546524    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:58.042969    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:58.044393    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:58.046178    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:58.543317    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:58.545301    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:58.546814    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:59.046552    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:59.046906    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:59.048111    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:59.543069    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:56:59.544289    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:56:59.546526    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:56:59.978256    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:00.044579    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:00.045352    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:00.047173    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:00.543466    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:00.544576    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:00.547800    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:01.043653    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:01.044793    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:01.046902    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:01.543817    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:01.544728    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:01.547005    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:02.044211    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:02.045206    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:02.047058    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:02.475141    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:02.558073    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:02.558161    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:02.559008    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:03.043196    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:03.047162    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:03.048605    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:03.544407    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:03.546158    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:03.547445    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:04.044489    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:04.045940    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:04.047530    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:04.544154    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:04.545508    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:04.547512    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:04.972859    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:05.044285    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:05.045256    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:05.047438    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:05.544252    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:05.545398    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:05.547628    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:06.044437    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:06.045490    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:06.047437    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:06.544568    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:06.545708    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:06.547691    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:06.973632    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:07.045186    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:07.046534    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:07.047881    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 10:57:07.544704    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:07.545690    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:07.548140    2258 kapi.go:107] duration metric: took 27.501813625s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 10:57:08.044355    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:08.045709    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:08.545194    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:08.546365    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:08.975099    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:09.044582    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:09.045615    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:09.544561    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:09.546213    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:10.045054    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:10.046088    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:10.544640    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:10.545870    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:11.044818    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:11.046049    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:11.474362    2258 pod_ready.go:103] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"False"
	I0915 10:57:11.544668    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:11.545944    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:11.974593    2258 pod_ready.go:93] pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:11.974602    2258 pod_ready.go:82] duration metric: took 25.505142209s for pod "coredns-7c65d6cfc9-jjr57" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.974606    2258 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.976513    2258 pod_ready.go:93] pod "etcd-addons-620000" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:11.976518    2258 pod_ready.go:82] duration metric: took 1.908792ms for pod "etcd-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.976522    2258 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.978425    2258 pod_ready.go:93] pod "kube-apiserver-addons-620000" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:11.978430    2258 pod_ready.go:82] duration metric: took 1.904709ms for pod "kube-apiserver-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.978433    2258 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.980496    2258 pod_ready.go:93] pod "kube-controller-manager-addons-620000" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:11.980502    2258 pod_ready.go:82] duration metric: took 2.065292ms for pod "kube-controller-manager-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.980509    2258 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d5j2x" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.982386    2258 pod_ready.go:93] pod "kube-proxy-d5j2x" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:11.982390    2258 pod_ready.go:82] duration metric: took 1.878417ms for pod "kube-proxy-d5j2x" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:11.982393    2258 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:12.044458    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:12.045593    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:12.374258    2258 pod_ready.go:93] pod "kube-scheduler-addons-620000" in "kube-system" namespace has status "Ready":"True"
	I0915 10:57:12.374270    2258 pod_ready.go:82] duration metric: took 391.846291ms for pod "kube-scheduler-addons-620000" in "kube-system" namespace to be "Ready" ...
	I0915 10:57:12.374276    2258 pod_ready.go:39] duration metric: took 35.920011709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 10:57:12.374289    2258 api_server.go:52] waiting for apiserver process to appear ...
	I0915 10:57:12.374387    2258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 10:57:12.382575    2258 api_server.go:72] duration metric: took 36.401796292s to wait for apiserver process to appear ...
	I0915 10:57:12.382587    2258 api_server.go:88] waiting for apiserver healthz status ...
	I0915 10:57:12.382596    2258 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0915 10:57:12.385491    2258 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0915 10:57:12.386045    2258 api_server.go:141] control plane version: v1.31.1
	I0915 10:57:12.386052    2258 api_server.go:131] duration metric: took 3.4615ms to wait for apiserver health ...
	I0915 10:57:12.386055    2258 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 10:57:12.545072    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:12.546272    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:12.577963    2258 system_pods.go:59] 17 kube-system pods found
	I0915 10:57:12.577971    2258 system_pods.go:61] "coredns-7c65d6cfc9-jjr57" [b98932b2-c959-4377-ab4d-2dc3bee992dc] Running
	I0915 10:57:12.577975    2258 system_pods.go:61] "csi-hostpath-attacher-0" [dca6eaff-f7b0-4be9-9be7-08e855001680] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 10:57:12.577979    2258 system_pods.go:61] "csi-hostpath-resizer-0" [5617e0cd-8032-4214-a6f2-17494b75843d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 10:57:12.577982    2258 system_pods.go:61] "csi-hostpathplugin-4m5rf" [bc09a6a1-f884-483b-ba05-b59f0369be92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 10:57:12.577985    2258 system_pods.go:61] "etcd-addons-620000" [dde00dce-7d39-4955-b0ba-b07216647aff] Running
	I0915 10:57:12.577988    2258 system_pods.go:61] "kube-apiserver-addons-620000" [fc224943-520e-412d-bc28-191e66fa9d3d] Running
	I0915 10:57:12.577990    2258 system_pods.go:61] "kube-controller-manager-addons-620000" [577f47d2-425b-46fe-baf3-518f1934c32d] Running
	I0915 10:57:12.577992    2258 system_pods.go:61] "kube-ingress-dns-minikube" [fc024b80-c66c-42e7-9a72-ddaa99f1fa64] Running
	I0915 10:57:12.577994    2258 system_pods.go:61] "kube-proxy-d5j2x" [9948838d-ab28-47e5-888b-bd22bda0300a] Running
	I0915 10:57:12.577996    2258 system_pods.go:61] "kube-scheduler-addons-620000" [6b4dc84f-f792-4d1d-8e81-ae1a17f8a1f1] Running
	I0915 10:57:12.577997    2258 system_pods.go:61] "metrics-server-84c5f94fbc-h9hbf" [bad8f796-8cfd-42f6-a05a-6dea9f543306] Running
	I0915 10:57:12.577999    2258 system_pods.go:61] "nvidia-device-plugin-daemonset-xbbmc" [d25e5994-a4f9-4ec0-b2c7-60b234f58eea] Running
	I0915 10:57:12.578000    2258 system_pods.go:61] "registry-66c9cd494c-66pdh" [95e9f23d-5878-4962-aaec-4a917383b9a2] Running
	I0915 10:57:12.578002    2258 system_pods.go:61] "registry-proxy-7jd6c" [07c364fc-3808-4c42-a919-efc8d8fd3ddc] Running
	I0915 10:57:12.578004    2258 system_pods.go:61] "snapshot-controller-56fcc65765-mmb95" [53755013-c86f-435d-b854-3ce20650ac6c] Running
	I0915 10:57:12.578007    2258 system_pods.go:61] "snapshot-controller-56fcc65765-vznwh" [a30e3962-4827-46ac-9557-3647c18f7960] Running
	I0915 10:57:12.578009    2258 system_pods.go:61] "storage-provisioner" [cd252d9c-8ab2-4b53-9b50-7fe6200ed09d] Running
	I0915 10:57:12.578013    2258 system_pods.go:74] duration metric: took 191.941375ms to wait for pod list to return data ...
	I0915 10:57:12.578016    2258 default_sa.go:34] waiting for default service account to be created ...
	I0915 10:57:12.774788    2258 default_sa.go:45] found service account: "default"
	I0915 10:57:12.774800    2258 default_sa.go:55] duration metric: took 196.767708ms for default service account to be created ...
	I0915 10:57:12.774804    2258 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 10:57:12.977699    2258 system_pods.go:86] 17 kube-system pods found
	I0915 10:57:12.977710    2258 system_pods.go:89] "coredns-7c65d6cfc9-jjr57" [b98932b2-c959-4377-ab4d-2dc3bee992dc] Running
	I0915 10:57:12.977715    2258 system_pods.go:89] "csi-hostpath-attacher-0" [dca6eaff-f7b0-4be9-9be7-08e855001680] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 10:57:12.977718    2258 system_pods.go:89] "csi-hostpath-resizer-0" [5617e0cd-8032-4214-a6f2-17494b75843d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 10:57:12.977721    2258 system_pods.go:89] "csi-hostpathplugin-4m5rf" [bc09a6a1-f884-483b-ba05-b59f0369be92] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 10:57:12.977723    2258 system_pods.go:89] "etcd-addons-620000" [dde00dce-7d39-4955-b0ba-b07216647aff] Running
	I0915 10:57:12.977725    2258 system_pods.go:89] "kube-apiserver-addons-620000" [fc224943-520e-412d-bc28-191e66fa9d3d] Running
	I0915 10:57:12.977727    2258 system_pods.go:89] "kube-controller-manager-addons-620000" [577f47d2-425b-46fe-baf3-518f1934c32d] Running
	I0915 10:57:12.977729    2258 system_pods.go:89] "kube-ingress-dns-minikube" [fc024b80-c66c-42e7-9a72-ddaa99f1fa64] Running
	I0915 10:57:12.977731    2258 system_pods.go:89] "kube-proxy-d5j2x" [9948838d-ab28-47e5-888b-bd22bda0300a] Running
	I0915 10:57:12.977732    2258 system_pods.go:89] "kube-scheduler-addons-620000" [6b4dc84f-f792-4d1d-8e81-ae1a17f8a1f1] Running
	I0915 10:57:12.977734    2258 system_pods.go:89] "metrics-server-84c5f94fbc-h9hbf" [bad8f796-8cfd-42f6-a05a-6dea9f543306] Running
	I0915 10:57:12.977736    2258 system_pods.go:89] "nvidia-device-plugin-daemonset-xbbmc" [d25e5994-a4f9-4ec0-b2c7-60b234f58eea] Running
	I0915 10:57:12.977737    2258 system_pods.go:89] "registry-66c9cd494c-66pdh" [95e9f23d-5878-4962-aaec-4a917383b9a2] Running
	I0915 10:57:12.977739    2258 system_pods.go:89] "registry-proxy-7jd6c" [07c364fc-3808-4c42-a919-efc8d8fd3ddc] Running
	I0915 10:57:12.977741    2258 system_pods.go:89] "snapshot-controller-56fcc65765-mmb95" [53755013-c86f-435d-b854-3ce20650ac6c] Running
	I0915 10:57:12.977743    2258 system_pods.go:89] "snapshot-controller-56fcc65765-vznwh" [a30e3962-4827-46ac-9557-3647c18f7960] Running
	I0915 10:57:12.977745    2258 system_pods.go:89] "storage-provisioner" [cd252d9c-8ab2-4b53-9b50-7fe6200ed09d] Running
	I0915 10:57:12.977748    2258 system_pods.go:126] duration metric: took 202.929ms to wait for k8s-apps to be running ...
	I0915 10:57:12.977754    2258 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 10:57:12.977833    2258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 10:57:12.985097    2258 system_svc.go:56] duration metric: took 7.342125ms WaitForService to wait for kubelet
	I0915 10:57:12.985107    2258 kubeadm.go:582] duration metric: took 37.004290375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 10:57:12.985116    2258 node_conditions.go:102] verifying NodePressure condition ...
	I0915 10:57:13.043516    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:13.046227    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:13.174859    2258 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0915 10:57:13.174870    2258 node_conditions.go:123] node cpu capacity is 2
	I0915 10:57:13.174877    2258 node_conditions.go:105] duration metric: took 189.745334ms to run NodePressure ...
	I0915 10:57:13.174883    2258 start.go:241] waiting for startup goroutines ...
	I0915 10:57:13.544893    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:13.545878    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:14.046023    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:14.052069    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:14.545492    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:14.546596    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:15.046174    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:15.048542    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:15.545438    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:15.546672    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:16.045105    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:16.046886    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:16.545002    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:16.546149    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:17.045393    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:17.046156    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:17.545249    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:17.546235    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:18.045186    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:18.046668    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:18.545509    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:18.546623    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:19.045863    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:19.047000    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:19.545196    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:19.546514    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:20.045591    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:20.046656    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:20.545227    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:20.546222    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:21.045479    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:21.046415    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:21.546567    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:21.546679    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:22.045517    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:22.046516    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:22.546970    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:22.547110    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:23.045530    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:23.046960    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:23.545623    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:23.546790    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:24.045438    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:24.045886    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:24.545803    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:24.546594    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:25.049826    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:25.050851    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:25.546956    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:25.549446    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:26.045986    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:26.047050    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:26.545537    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:26.546877    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:27.045775    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:27.047166    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:27.548448    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:27.550729    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:28.045726    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:28.047027    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:28.545390    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:28.546412    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:29.045331    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:29.046434    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:29.545747    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:29.546994    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:30.045980    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:30.047481    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:30.546753    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:30.549005    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:31.045648    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:31.046614    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:31.545526    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:31.546564    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:32.045720    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:32.047703    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:32.546004    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:32.546943    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:33.045792    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:33.046689    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:33.546300    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:33.548732    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:34.045556    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:34.046514    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:34.545748    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:34.547085    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:35.045578    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:35.047051    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:35.545345    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:35.546382    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:36.045595    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:36.046818    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:36.546264    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:36.548218    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:37.045475    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:37.046516    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:37.545592    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:37.549953    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:38.044094    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:38.046459    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:38.553292    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:38.555105    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:39.045552    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:39.046675    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:39.545130    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:39.546013    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:40.044214    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:40.046394    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:40.551671    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 10:57:40.551737    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:41.046662    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:41.048173    2258 kapi.go:107] duration metric: took 1m1.00391775s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 10:57:41.552192    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:42.047967    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:42.553940    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:43.052019    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:43.545295    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:44.045849    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:44.544335    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:45.045241    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:45.545994    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:46.045161    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:46.545691    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:47.045082    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:47.545844    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:48.045256    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:48.545333    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:49.045389    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:49.545256    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:50.045430    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:50.545341    2258 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 10:57:51.045393    2258 kapi.go:107] duration metric: took 1m11.00436825s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 10:58:05.406401    2258 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 10:58:05.406413    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:05.907404    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:06.407521    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:06.908633    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:07.406338    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:07.910913    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:08.408493    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:08.912152    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:09.411421    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:09.910990    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:10.406371    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:10.911617    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:11.413507    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:11.907143    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:12.410826    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:12.909975    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:13.407951    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:13.911370    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:14.412036    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:14.911725    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:15.407814    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:15.912314    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:16.408400    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:16.910046    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:17.408013    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:17.911990    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:18.424932    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:18.910881    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:19.414178    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:19.907529    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:20.405933    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:20.911102    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:21.407577    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:21.906409    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:22.407209    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:22.906543    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:23.410467    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:23.907825    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:24.415708    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:24.907897    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:25.411252    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:25.909482    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:26.407075    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:26.905992    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:27.411436    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:27.909932    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:28.410450    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:28.912145    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:29.410743    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:29.907803    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:30.405003    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:30.912219    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:31.412380    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:31.907415    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:32.411922    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:32.911041    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:33.411385    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:33.909738    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:34.406164    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:34.909288    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:35.407486    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:35.907346    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:36.410327    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:36.907499    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:37.410392    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:37.906041    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:38.410713    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:38.906849    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:39.406686    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:39.907193    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:40.405224    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:40.906176    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:41.411029    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:41.912255    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:42.408038    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:42.909093    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:43.411348    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:43.908590    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:44.407589    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:44.905725    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:45.413494    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:45.908004    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:46.405369    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:46.905162    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:47.407012    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:47.907077    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:48.406572    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:48.909944    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:49.404014    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:49.906745    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:50.406481    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:50.909982    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:51.411057    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:51.909861    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:52.407473    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:52.909501    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:53.406555    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:53.909443    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:54.406868    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:54.906132    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:55.410495    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:55.912403    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:56.406358    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:56.905606    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:57.408379    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:57.905675    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:58.406227    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:58.909151    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:59.409110    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:58:59.909789    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:00.403647    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:00.904836    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:01.406550    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:01.910299    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:02.404565    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:02.909360    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:03.406611    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:03.905384    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:04.405904    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:04.905272    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:05.412572    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:05.910571    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:06.410183    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:06.908007    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:07.408437    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:07.910110    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:08.410649    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:08.908907    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:09.409751    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:09.904201    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:10.409038    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:10.904339    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:11.403963    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:11.905809    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:12.403916    2258 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 10:59:12.904231    2258 kapi.go:107] duration metric: took 2m29.503894208s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 10:59:12.909262    2258 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-620000 cluster.
	I0915 10:59:12.914268    2258 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 10:59:12.918165    2258 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 10:59:12.922334    2258 out.go:177] * Enabled addons: ingress-dns, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, default-storageclass, volcano, cloud-spanner, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0915 10:59:12.926218    2258 addons.go:510] duration metric: took 2m36.947761083s for enable addons: enabled=[ingress-dns storage-provisioner-rancher storage-provisioner inspektor-gadget default-storageclass volcano cloud-spanner nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0915 10:59:12.926234    2258 start.go:246] waiting for cluster config update ...
	I0915 10:59:12.926256    2258 start.go:255] writing updated cluster config ...
	I0915 10:59:12.926729    2258 ssh_runner.go:195] Run: rm -f paused
	I0915 10:59:13.085557    2258 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0915 10:59:13.089811    2258 out.go:201] 
	W0915 10:59:13.093201    2258 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0915 10:59:13.097268    2258 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0915 10:59:13.105209    2258 out.go:177] * Done! kubectl is now configured to use "addons-620000" cluster and "default" namespace by default
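	(Editor's note on the gcp-auth message above: the log says pods can opt out of credential mounting via a label with the `gcp-auth-skip-secret` key. Below is a minimal sketch of what such a pod definition looks like, written with client-go types since minikube itself is Go. Assumptions not in the log: the "true" label value, the "demo" pod name and busybox image are illustrative placeholders, and the client-go import paths assume a v0.31.x module set.)

	// Minimal sketch, assuming client-go v0.31.x. The gcp-auth-skip-secret
	// label key is taken from the log above; the "true" value and the
	// pod name/image are illustrative assumptions.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "demo",
				// Opt this pod out of gcp-auth credential mounting.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "demo", Image: "busybox"},
				},
			},
		}
		fmt.Println(pod.Labels) // map[gcp-auth-skip-secret:true]
	}

	(Per the warning above about the kubectl 1.29.2 / cluster 1.31.1 skew, the version-matched client can be invoked as the log suggests, e.g. `minikube kubectl -- get pods -A`.)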
	
	
	==> Docker <==
	Sep 15 18:08:58 addons-620000 dockerd[1275]: time="2024-09-15T18:08:58.089894561Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:02 addons-620000 dockerd[1275]: time="2024-09-15T18:09:02.157101034Z" level=info msg="shim disconnected" id=5416f3dac2b03b9eccb929f17eea57e3a89b65a780eb946dddbeea6844749268 namespace=moby
	Sep 15 18:09:02 addons-620000 dockerd[1275]: time="2024-09-15T18:09:02.157304314Z" level=warning msg="cleaning up after shim disconnected" id=5416f3dac2b03b9eccb929f17eea57e3a89b65a780eb946dddbeea6844749268 namespace=moby
	Sep 15 18:09:02 addons-620000 dockerd[1275]: time="2024-09-15T18:09:02.157323305Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:02 addons-620000 dockerd[1269]: time="2024-09-15T18:09:02.157806827Z" level=info msg="ignoring event" container=5416f3dac2b03b9eccb929f17eea57e3a89b65a780eb946dddbeea6844749268 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1269]: time="2024-09-15T18:09:03.221490559Z" level=info msg="ignoring event" container=624497174cfb74bfc83fe5ad9198484f6bc889db96b3878b5e98030cbdb89a6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.221867631Z" level=info msg="shim disconnected" id=624497174cfb74bfc83fe5ad9198484f6bc889db96b3878b5e98030cbdb89a6f namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.221899949Z" level=warning msg="cleaning up after shim disconnected" id=624497174cfb74bfc83fe5ad9198484f6bc889db96b3878b5e98030cbdb89a6f namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.221921647Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1269]: time="2024-09-15T18:09:03.376448580Z" level=info msg="ignoring event" container=d5846977d142cebef36c778621152c8689aa45542685469b73789c5f9d31c24d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.376941098Z" level=info msg="shim disconnected" id=d5846977d142cebef36c778621152c8689aa45542685469b73789c5f9d31c24d namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.377077242Z" level=warning msg="cleaning up after shim disconnected" id=d5846977d142cebef36c778621152c8689aa45542685469b73789c5f9d31c24d namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.377082698Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1269]: time="2024-09-15T18:09:03.401231789Z" level=info msg="ignoring event" container=a41350be4e675ef125b3f728f593eb814559f43f222e407e65b4b62f41404b2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.409845402Z" level=info msg="shim disconnected" id=a41350be4e675ef125b3f728f593eb814559f43f222e407e65b4b62f41404b2f namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.409934527Z" level=warning msg="cleaning up after shim disconnected" id=a41350be4e675ef125b3f728f593eb814559f43f222e407e65b4b62f41404b2f namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.409953726Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1269]: time="2024-09-15T18:09:03.462529146Z" level=info msg="ignoring event" container=6d6750fdeadabeb00fdf0ade047e96a1eedbaefbc95d9caddf8b981ffea51cb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.462674994Z" level=info msg="shim disconnected" id=6d6750fdeadabeb00fdf0ade047e96a1eedbaefbc95d9caddf8b981ffea51cb1 namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.462702398Z" level=warning msg="cleaning up after shim disconnected" id=6d6750fdeadabeb00fdf0ade047e96a1eedbaefbc95d9caddf8b981ffea51cb1 namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.462706437Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1269]: time="2024-09-15T18:09:03.509526649Z" level=info msg="ignoring event" container=09b9ae4bc32161cb59018f0b85f3bd94230f63f145cab14a15afd290968dda56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.509603280Z" level=info msg="shim disconnected" id=09b9ae4bc32161cb59018f0b85f3bd94230f63f145cab14a15afd290968dda56 namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.509633724Z" level=warning msg="cleaning up after shim disconnected" id=09b9ae4bc32161cb59018f0b85f3bd94230f63f145cab14a15afd290968dda56 namespace=moby
	Sep 15 18:09:03 addons-620000 dockerd[1275]: time="2024-09-15T18:09:03.509637722Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	edfa195bfe6fc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   dcba8b9c94e84       gcp-auth-89d5ffd79-cm2tn
	1b300fecd7d40       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   4e4b9dc121274       ingress-nginx-controller-bc57996ff-5vchs
	958676b376205       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   18f0bf6e20501       ingress-nginx-admission-patch-v9nqh
	1cab43ae8e1f9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   8b74bc33c6ed1       ingress-nginx-admission-create-4qrfg
	db5f0c5eb934d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago      Running             yakd                       0                   a25f8e2c5bc7a       yakd-dashboard-67d98fc6b-zbnv6
	a41350be4e675       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy             0                   09b9ae4bc3216       registry-proxy-7jd6c
	fae437b0b187e       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   ff040699dec52       nvidia-device-plugin-daemonset-xbbmc
	d5846977d142c       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   6d6750fdeadab       registry-66c9cd494c-66pdh
	b21cabf89fe00       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   888a347729472       cloud-spanner-emulator-769b77f747-56nj4
	d37bf0d812c31       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   668647730b11c       local-path-provisioner-86d989889c-rkmw7
	209d75b603db8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   a758fed13c135       kube-ingress-dns-minikube
	2ab7854a9ce85       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   95104b2c0fe34       storage-provisioner
	174f8fdc3828a       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   6e88cc2e96655       kube-proxy-d5j2x
	37fdf70c8752f       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   ccde1f8936c83       coredns-7c65d6cfc9-jjr57
	41eb98405d549       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   caad341c284ad       etcd-addons-620000
	6eba599541693       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   15fe7e4e59a64       kube-scheduler-addons-620000
	f4cd38ddf162a       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   0dcbab0f3379b       kube-controller-manager-addons-620000
	f1697f05a4d6b       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   03d2fe8642611       kube-apiserver-addons-620000
	
	
	==> controller_ingress [1b300fecd7d4] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0915 17:57:51.000012       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0915 17:57:51.000101       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0915 17:57:51.003069       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0915 17:57:51.184364       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0915 17:57:51.192190       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0915 17:57:51.197496       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0915 17:57:51.207156       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"fa0a49cd-5e72-4f9a-ade6-16fb57c1702c", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0915 17:57:51.208450       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"53fce08f-e6f3-41f0-bf74-22972780dfb5", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0915 17:57:51.208487       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"bd1cc653-a0f9-485d-a12e-13aafdc77607", APIVersion:"v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0915 17:57:52.398828       7 nginx.go:317] "Starting NGINX process"
	I0915 17:57:52.399041       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0915 17:57:52.400119       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0915 17:57:52.400316       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 17:57:52.411826       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0915 17:57:52.412002       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-5vchs"
	I0915 17:57:52.416652       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-5vchs" node="addons-620000"
	I0915 17:57:52.426888       7 controller.go:213] "Backend successfully reloaded"
	I0915 17:57:52.426969       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0915 17:57:52.426986       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-5vchs", UID:"1acb219d-be28-4b49-8378-94a079fe9e52", APIVersion:"v1", ResourceVersion:"1257", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [37fdf70c8752] <==
	[INFO] plugin/kubernetes: Trace[2117756428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 17:56:36.561) (total time: 30000ms):
	Trace[2117756428]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:57:06.562)
	Trace[2117756428]: [30.000868639s] [30.000868639s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1777995401]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 17:56:36.561) (total time: 30000ms):
	Trace[1777995401]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:57:06.562)
	Trace[1777995401]: [30.000900234s] [30.000900234s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1515393883]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 17:56:36.561) (total time: 30000ms):
	Trace[1515393883]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:57:06.562)
	Trace[1515393883]: [30.000731668s] [30.000731668s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 10.244.0.25:58912 - 47130 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000130114s
	[INFO] 10.244.0.25:48144 - 40259 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000037342s
	[INFO] 10.244.0.25:39526 - 34072 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00002859s
	[INFO] 10.244.0.25:49162 - 20105 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000026381s
	[INFO] 10.244.0.25:52733 - 26453 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000024672s
	[INFO] 10.244.0.25:58712 - 11813 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000039093s
	[INFO] 10.244.0.25:46634 - 40808 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001660978s
	[INFO] 10.244.0.25:49699 - 6546 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001950463s
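
The NXDOMAIN-then-NOERROR run above is ordinary resolver search-list expansion: kubelet writes a pod resolv.conf with ndots:5 and the namespace/service/cluster search domains, so a short name like storage.googleapis.com is tried with each suffix before being queried as an absolute name. An illustrative Go sketch of the expansion order (assuming the standard search list for a pod in the gcp-auth namespace; this is not CoreDNS source):

	package main

	import "fmt"

	func main() {
		// kubelet's typical search list for a pod in the gcp-auth namespace
		search := []string{"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		name := "storage.googleapis.com" // 2 dots < ndots:5, so suffixes are tried first
		for _, s := range search {
			fmt.Println(name + "." + s) // each of these returned NXDOMAIN above
		}
		fmt.Println(name) // the absolute query finally returns NOERROR
	}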
	
	
	==> describe nodes <==
	Name:               addons-620000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-620000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673
	                    minikube.k8s.io/name=addons-620000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T10_56_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-620000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 17:56:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-620000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 18:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 18:05:11 +0000   Sun, 15 Sep 2024 17:56:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 18:05:11 +0000   Sun, 15 Sep 2024 17:56:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 18:05:11 +0000   Sun, 15 Sep 2024 17:56:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 18:05:11 +0000   Sun, 15 Sep 2024 17:56:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-620000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 3422776d469f4f489dd9cfb1cd7dc20f
	  System UUID:                3422776d469f4f489dd9cfb1cd7dc20f
	  Boot ID:                    a424611b-324a-43d0-b9d4-f7248bb4cfa1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-56nj4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gcp-auth                    gcp-auth-89d5ffd79-cm2tn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-5vchs    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-jjr57                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-620000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-620000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-620000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-d5j2x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-620000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-xbbmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-rkmw7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-zbnv6              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-620000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-620000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-620000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-620000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-620000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-620000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-620000 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-620000 event: Registered Node addons-620000 in Controller
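
The percentages in the Allocated resources table are requests and limits divided by the node's allocatable capacity (2000m CPU, 3904740Ki of memory), truncated to whole percent. A quick check with the values from the table:

	package main

	import "fmt"

	func main() {
		// Allocatable: 2 CPUs and 3904740Ki of memory (from the node above).
		allocCPU := 2000.0             // millicores
		allocMemMi := 3904740.0 / 1024 // ~3813.2 Mi

		fmt.Printf("cpu req: %.1f%%\n", 850/allocCPU*100)   // 42.5% -> shown as 42%
		fmt.Printf("mem req: %.1f%%\n", 388/allocMemMi*100) // ~10.2% -> shown as 10%
		fmt.Printf("mem lim: %.1f%%\n", 426/allocMemMi*100) // ~11.2% -> shown as 11%
	}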
	
	
	==> dmesg <==
	[  +6.995855] kauditd_printk_skb: 12 callbacks suppressed
	[Sep15 17:57] kauditd_printk_skb: 4 callbacks suppressed
	[ +15.287849] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.623459] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.107129] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.954014] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.906805] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.326300] kauditd_printk_skb: 22 callbacks suppressed
	[Sep15 17:58] kauditd_printk_skb: 18 callbacks suppressed
	[ +43.247977] kauditd_printk_skb: 2 callbacks suppressed
	[Sep15 17:59] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.544724] kauditd_printk_skb: 15 callbacks suppressed
	[ +21.113846] kauditd_printk_skb: 9 callbacks suppressed
	[ +10.968722] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.460240] kauditd_printk_skb: 20 callbacks suppressed
	[Sep15 18:00] kauditd_printk_skb: 2 callbacks suppressed
	[Sep15 18:03] kauditd_printk_skb: 2 callbacks suppressed
	[Sep15 18:08] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.389000] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.504962] kauditd_printk_skb: 7 callbacks suppressed
	[ +19.545458] kauditd_printk_skb: 10 callbacks suppressed
	[  +7.784940] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.743211] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.445370] kauditd_printk_skb: 6 callbacks suppressed
	[Sep15 18:09] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [41eb98405d54] <==
	{"level":"info","ts":"2024-09-15T17:56:26.881869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 switched to configuration voters=(14154013790752671120)"}
	{"level":"info","ts":"2024-09-15T17:56:26.882126Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2024-09-15T17:56:27.157494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T17:56:27.157586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T17:56:27.157651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-15T17:56:27.157674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T17:56:27.157689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-15T17:56:27.157736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-15T17:56:27.157753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-15T17:56:27.158530Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:56:27.160378Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-620000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T17:56:27.160481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T17:56:27.161095Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T17:56:27.161632Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-15T17:56:27.161836Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:56:27.161868Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:56:27.161880Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T17:56:27.162420Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T17:56:27.167832Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T17:56:27.168301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T17:56:27.169298Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T17:56:27.170184Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T18:06:27.197975Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1900}
	{"level":"info","ts":"2024-09-15T18:06:27.297158Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1900,"took":"96.644349ms","hash":3404367825,"current-db-size-bytes":8728576,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4968448,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-15T18:06:27.297188Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3404367825,"revision":1900,"compact-revision":-1}
	
	
	==> gcp-auth [edfa195bfe6f] <==
	2024/09/15 17:59:12 GCP Auth Webhook started!
	2024/09/15 17:59:28 Ready to marshal response ...
	2024/09/15 17:59:28 Ready to write response ...
	2024/09/15 17:59:29 Ready to marshal response ...
	2024/09/15 17:59:29 Ready to write response ...
	2024/09/15 17:59:51 Ready to marshal response ...
	2024/09/15 17:59:51 Ready to write response ...
	2024/09/15 17:59:51 Ready to marshal response ...
	2024/09/15 17:59:51 Ready to write response ...
	2024/09/15 17:59:52 Ready to marshal response ...
	2024/09/15 17:59:52 Ready to write response ...
	2024/09/15 18:08:03 Ready to marshal response ...
	2024/09/15 18:08:03 Ready to write response ...
	2024/09/15 18:08:08 Ready to marshal response ...
	2024/09/15 18:08:08 Ready to write response ...
	2024/09/15 18:08:35 Ready to marshal response ...
	2024/09/15 18:08:35 Ready to write response ...
	
	
	==> kernel <==
	 18:09:03 up 12 min,  0 users,  load average: 0.50, 0.52, 0.37
	Linux addons-620000 5.10.207 #1 SMP PREEMPT Sun Sep 15 01:47:50 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f1697f05a4d6] <==
	I0915 17:59:42.333346       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0915 17:59:42.351128       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0915 17:59:43.101733       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0915 17:59:43.118039       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0915 17:59:43.332145       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0915 17:59:43.332159       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0915 17:59:43.335537       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0915 17:59:43.351633       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0915 17:59:43.505014       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0915 18:08:15.289354       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0915 18:08:51.334988       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:08:51.335011       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:08:51.345698       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:08:51.345717       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:08:51.358873       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:08:51.358997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:08:51.398616       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:08:51.398776       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 18:08:51.448013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 18:08:51.448026       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 18:08:52.401018       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0915 18:08:52.448685       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 18:08:52.461487       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0915 18:09:02.121671       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 18:09:03.231087       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [f4cd38ddf162] <==
	W0915 18:08:53.261063       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:53.261627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:53.386761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:53.386889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:53.674788       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:53.675350       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:53.858627       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:53.859065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:55.474681       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:55.474785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:55.849923       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:55.850038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:56.084191       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:56.084302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 18:08:56.862051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="2.457µs"
	W0915 18:08:58.769734       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:58.769883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:59.484018       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:59.484142       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:08:59.632041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:08:59.632154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 18:09:01.089892       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 18:09:01.089978       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0915 18:09:03.232199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 18:09:03.348971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="1.541µs"
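
The repeating PartialObjectMetadata errors above come from the garbage collector's metadata informers relisting resources whose CRDs were just uninstalled (the snapshot.storage.k8s.io and gadget.kinvolk.io groups removed in the apiserver log); each relist fails until discovery resyncs and the informer is dropped. A minimal sketch of the client-go metadata-informer machinery involved, assuming in-cluster config and one of the deleted GVRs (not the actual controller-manager wiring):

	package main

	import (
		"time"

		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/metadata/metadatainformer"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		factory := metadatainformer.NewSharedInformerFactory(metadata.NewForConfigOrDie(cfg), 10*time.Minute)

		// The GC builds one informer per discovered resource. Once the CRD is
		// deleted, every relist of this GVR fails with "the server could not
		// find the requested resource" until discovery resyncs.
		gvr := schema.GroupVersionResource{Group: "gadget.kinvolk.io", Version: "v1alpha1", Resource: "traces"}
		informer := factory.ForResource(gvr).Informer()

		stop := make(chan struct{})
		factory.Start(stop)
		_ = informer // a real controller would add event handlers and wait for cache sync
		<-stop
	}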
	
	
	==> kube-proxy [174f8fdc3828] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 17:56:36.681635       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 17:56:36.688537       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0915 17:56:36.688569       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 17:56:36.760086       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 17:56:36.760106       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 17:56:36.760123       1 server_linux.go:169] "Using iptables Proxier"
	I0915 17:56:36.762027       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 17:56:36.762132       1 server.go:483] "Version info" version="v1.31.1"
	I0915 17:56:36.762138       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 17:56:36.763558       1 config.go:199] "Starting service config controller"
	I0915 17:56:36.763569       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 17:56:36.763581       1 config.go:105] "Starting endpoint slice config controller"
	I0915 17:56:36.763583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 17:56:36.765602       1 config.go:328] "Starting node config controller"
	I0915 17:56:36.765608       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 17:56:36.864598       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 17:56:36.864627       1 shared_informer.go:320] Caches are synced for service config
	I0915 17:56:36.865724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6eba59954169] <==
	W0915 17:56:28.243809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 17:56:28.244625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.243832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 17:56:28.244636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 17:56:28.244670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244685       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 17:56:28.244693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 17:56:28.244736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 17:56:28.244816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 17:56:28.244835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244858       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 17:56:28.244866       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:28.244921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 17:56:28.244931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:29.058363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 17:56:29.058392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:29.067018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 17:56:29.067043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 17:56:29.080757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 17:56:29.080780       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0915 17:56:29.241674       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.224270    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-run" (OuterVolumeSpecName: "run") pod "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6" (UID: "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.224277    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-modules" (OuterVolumeSpecName: "modules") pod "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6" (UID: "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.224284    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-bpffs" (OuterVolumeSpecName: "bpffs") pod "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6" (UID: "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.224290    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-cgroup" (OuterVolumeSpecName: "cgroup") pod "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6" (UID: "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.227691    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-kube-api-access-6gzlx" (OuterVolumeSpecName: "kube-api-access-6gzlx") pod "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6" (UID: "9a02ea4f-ce76-48fc-93fa-55b3c9239cc6"). InnerVolumeSpecName "kube-api-access-6gzlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324495    2030 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-host\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324518    2030 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-modules\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324526    2030 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6gzlx\" (UniqueName: \"kubernetes.io/projected/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-kube-api-access-6gzlx\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324534    2030 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-run\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324540    2030 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-debugfs\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324547    2030 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-cgroup\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:02 addons-620000 kubelet[2030]: I0915 18:09:02.324552    2030 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/9a02ea4f-ce76-48fc-93fa-55b3c9239cc6-bpffs\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.195379    2030 scope.go:117] "RemoveContainer" containerID="1184e70b58a199bf2ef3a22b8ae88dab12985353aa143bbecb88f4b2e7fd75d6"
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.342640    2030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/72eb6068-50ed-447f-812c-29d6f874299f-gcp-creds\") pod \"72eb6068-50ed-447f-812c-29d6f874299f\" (UID: \"72eb6068-50ed-447f-812c-29d6f874299f\") "
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.342665    2030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk7wv\" (UniqueName: \"kubernetes.io/projected/72eb6068-50ed-447f-812c-29d6f874299f-kube-api-access-jk7wv\") pod \"72eb6068-50ed-447f-812c-29d6f874299f\" (UID: \"72eb6068-50ed-447f-812c-29d6f874299f\") "
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.342900    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72eb6068-50ed-447f-812c-29d6f874299f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "72eb6068-50ed-447f-812c-29d6f874299f" (UID: "72eb6068-50ed-447f-812c-29d6f874299f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.346762    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72eb6068-50ed-447f-812c-29d6f874299f-kube-api-access-jk7wv" (OuterVolumeSpecName: "kube-api-access-jk7wv") pod "72eb6068-50ed-447f-812c-29d6f874299f" (UID: "72eb6068-50ed-447f-812c-29d6f874299f"). InnerVolumeSpecName "kube-api-access-jk7wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.443431    2030 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jk7wv\" (UniqueName: \"kubernetes.io/projected/72eb6068-50ed-447f-812c-29d6f874299f-kube-api-access-jk7wv\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.443449    2030 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/72eb6068-50ed-447f-812c-29d6f874299f-gcp-creds\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.543748    2030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxzw5\" (UniqueName: \"kubernetes.io/projected/95e9f23d-5878-4962-aaec-4a917383b9a2-kube-api-access-zxzw5\") pod \"95e9f23d-5878-4962-aaec-4a917383b9a2\" (UID: \"95e9f23d-5878-4962-aaec-4a917383b9a2\") "
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.544427    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e9f23d-5878-4962-aaec-4a917383b9a2-kube-api-access-zxzw5" (OuterVolumeSpecName: "kube-api-access-zxzw5") pod "95e9f23d-5878-4962-aaec-4a917383b9a2" (UID: "95e9f23d-5878-4962-aaec-4a917383b9a2"). InnerVolumeSpecName "kube-api-access-zxzw5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.644007    2030 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5scg\" (UniqueName: \"kubernetes.io/projected/07c364fc-3808-4c42-a919-efc8d8fd3ddc-kube-api-access-c5scg\") pod \"07c364fc-3808-4c42-a919-efc8d8fd3ddc\" (UID: \"07c364fc-3808-4c42-a919-efc8d8fd3ddc\") "
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.644049    2030 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zxzw5\" (UniqueName: \"kubernetes.io/projected/95e9f23d-5878-4962-aaec-4a917383b9a2-kube-api-access-zxzw5\") on node \"addons-620000\" DevicePath \"\""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.644818    2030 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c364fc-3808-4c42-a919-efc8d8fd3ddc-kube-api-access-c5scg" (OuterVolumeSpecName: "kube-api-access-c5scg") pod "07c364fc-3808-4c42-a919-efc8d8fd3ddc" (UID: "07c364fc-3808-4c42-a919-efc8d8fd3ddc"). InnerVolumeSpecName "kube-api-access-c5scg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:09:03 addons-620000 kubelet[2030]: I0915 18:09:03.744298    2030 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-c5scg\" (UniqueName: \"kubernetes.io/projected/07c364fc-3808-4c42-a919-efc8d8fd3ddc-kube-api-access-c5scg\") on node \"addons-620000\" DevicePath \"\""
	
	
	==> storage-provisioner [2ab7854a9ce8] <==
	I0915 17:56:37.819991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 17:56:37.832127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 17:56:37.832167       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 17:56:37.837563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 17:56:37.839739       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-620000_f89f6cdd-eb84-4440-82a4-6a94b9f793ee!
	I0915 17:56:37.840493       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7e3c311-0d0c-4ef4-9c7b-a924d75b4d0d", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-620000_f89f6cdd-eb84-4440-82a4-6a94b9f793ee became leader
	I0915 17:56:37.940029       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-620000_f89f6cdd-eb84-4440-82a4-6a94b9f793ee!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-620000 -n addons-620000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-620000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-test ingress-nginx-admission-create-4qrfg ingress-nginx-admission-patch-v9nqh registry-proxy-7jd6c
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-620000 describe pod busybox registry-test ingress-nginx-admission-create-4qrfg ingress-nginx-admission-patch-v9nqh registry-proxy-7jd6c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-620000 describe pod busybox registry-test ingress-nginx-admission-create-4qrfg ingress-nginx-admission-patch-v9nqh registry-proxy-7jd6c: exit status 1 (49.429042ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-620000/192.168.105.2
	Start Time:       Sun, 15 Sep 2024 10:59:51 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b25d2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b25d2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to addons-620000
	  Normal   Pulling    7m46s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m46s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m46s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-test" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-4qrfg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-v9nqh" not found
	Error from server (NotFound): pods "registry-proxy-7jd6c" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-620000 describe pod busybox registry-test ingress-nginx-admission-create-4qrfg ingress-nginx-admission-patch-v9nqh registry-proxy-7jd6c: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.30s)
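
Note on the failure above: the describe output pins the busybox pod's Pending state on ImagePullBackOff: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A quick way to separate a registry-side problem from a node-side one is to repeat the pull from outside the cluster. A minimal sketch, assuming Docker and curl are available on the host (neither is shown in this report):

	# Pull the image the kubelet kept failing on:
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Or probe the same manifest endpoint the kubelet HEADs; a 401 here
	# reproduces the "unauthorized" error independently of Kubernetes:
	curl -sI https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc

If the host-side pull succeeds, suspicion shifts to the node's network or credentials; if it fails identically, the registry endpoint itself is refusing anonymous pulls.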

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-255000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-255000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.86472s)

-- stdout --
	* [cert-options-255000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-255000" primary control-plane node in "cert-options-255000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-255000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-255000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-255000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-255000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-255000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.793209ms)

-- stdout --
	* The control-plane node cert-options-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-255000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-255000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-255000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-255000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-255000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.523833ms)

-- stdout --
	* The control-plane node cert-options-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-255000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-255000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-255000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-255000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-15 11:42:12.443133 -0700 PDT m=+2798.292625501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-255000 -n cert-options-255000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-255000 -n cert-options-255000: exit status 7 (30.7935ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-255000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-255000
--- FAIL: TestCertOptions (10.13s)
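
Note on the failure above: the test never reaches its certificate assertions. Both VM creation attempts die on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, so the SAN, kubeconfig-port, and admin.conf checks all fail against a host that was never started. The identical error recurs in the qemu2 tests below, which points at the socket_vmnet daemon on the CI host rather than at this test. A minimal host-side triage, assuming socket_vmnet was installed via Homebrew as in the minikube qemu2 docs (the report does not say how it was installed):

	# The unix socket the qemu2 driver dials should exist:
	ls -l /var/run/socket_vmnet
	# The daemon should be running:
	pgrep -fl socket_vmnet
	# If it is down on a Homebrew-managed install, restart the service:
	sudo brew services restart socket_vmnet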

TestCertExpiration (195.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.937190125s)

-- stdout --
	* [cert-expiration-621000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-621000" primary control-plane node in "cert-expiration-621000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-621000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222817625s)

-- stdout --
	* [cert-expiration-621000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-621000" primary control-plane node in "cert-expiration-621000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-621000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-621000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-621000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-621000" primary control-plane node in "cert-expiration-621000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-621000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-621000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-15 11:45:12.41083 -0700 PDT m=+2978.262533417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-621000 -n cert-expiration-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-621000 -n cert-expiration-621000: exit status 7 (37.598166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-621000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-621000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-621000
--- FAIL: TestCertExpiration (195.28s)
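
Note on the failure above: the 195s runtime is mostly deliberate waiting, not work. The two start attempts take about 10s and 5s; the remainder is, by the test's design, the wait for the 3-minute certificates requested by --cert-expiration=3m to lapse before the second start, and since no VM ever booted there are no certificates to expire and no warning for the test to find. On a host where the VM does boot, the expiry the test relies on can be checked directly. A minimal sketch, reusing the certificate path this suite already probes:

	out/minikube-darwin-arm64 -p cert-expiration-621000 ssh \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"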

TestDockerFlags (10.37s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-824000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-824000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.124900167s)

-- stdout --
	* [docker-flags-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-824000" primary control-plane node in "docker-flags-824000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-824000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:41:52.079598    5179 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:41:52.079717    5179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:52.079721    5179 out.go:358] Setting ErrFile to fd 2...
	I0915 11:41:52.079724    5179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:52.079860    5179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:41:52.080947    5179 out.go:352] Setting JSON to false
	I0915 11:41:52.096811    5179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4275,"bootTime":1726421437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:41:52.096887    5179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:41:52.103940    5179 out.go:177] * [docker-flags-824000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:41:52.109894    5179 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:41:52.109970    5179 notify.go:220] Checking for updates...
	I0915 11:41:52.117815    5179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:41:52.120865    5179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:41:52.123909    5179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:41:52.125341    5179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:41:52.128820    5179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:41:52.132173    5179 config.go:182] Loaded profile config "force-systemd-flag-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:52.132245    5179 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:52.132287    5179 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:41:52.136706    5179 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:41:52.143839    5179 start.go:297] selected driver: qemu2
	I0915 11:41:52.143846    5179 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:41:52.143854    5179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:41:52.146171    5179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:41:52.148896    5179 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:41:52.151882    5179 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0915 11:41:52.151898    5179 cni.go:84] Creating CNI manager for ""
	I0915 11:41:52.151924    5179 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:41:52.151928    5179 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:41:52.151964    5179 start.go:340] cluster config:
	{Name:docker-flags-824000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:41:52.156175    5179 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:41:52.164812    5179 out.go:177] * Starting "docker-flags-824000" primary control-plane node in "docker-flags-824000" cluster
	I0915 11:41:52.168829    5179 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:41:52.168844    5179 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:41:52.168855    5179 cache.go:56] Caching tarball of preloaded images
	I0915 11:41:52.168912    5179 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:41:52.168918    5179 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:41:52.168974    5179 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/docker-flags-824000/config.json ...
	I0915 11:41:52.168986    5179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/docker-flags-824000/config.json: {Name:mka086932e94df6308107d60b01082554ff4f055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:41:52.169204    5179 start.go:360] acquireMachinesLock for docker-flags-824000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:52.169241    5179 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "docker-flags-824000"
	I0915 11:41:52.169254    5179 start.go:93] Provisioning new machine with config: &{Name:docker-flags-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:52.169281    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:52.177817    5179 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:52.196666    5179 start.go:159] libmachine.API.Create for "docker-flags-824000" (driver="qemu2")
	I0915 11:41:52.196694    5179 client.go:168] LocalClient.Create starting
	I0915 11:41:52.196769    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:52.196802    5179 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:52.196810    5179 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:52.196847    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:52.196874    5179 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:52.196885    5179 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:52.197248    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:52.355104    5179 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:52.529978    5179 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:52.529985    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:52.530199    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:41:52.539699    5179 main.go:141] libmachine: STDOUT: 
	I0915 11:41:52.539728    5179 main.go:141] libmachine: STDERR: 
	I0915 11:41:52.539788    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2 +20000M
	I0915 11:41:52.547782    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:52.547799    5179 main.go:141] libmachine: STDERR: 
	I0915 11:41:52.547813    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:41:52.547818    5179 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:52.547834    5179 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:52.547861    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:9e:8f:e7:52:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:41:52.549538    5179 main.go:141] libmachine: STDOUT: 
	I0915 11:41:52.549558    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:52.549580    5179 client.go:171] duration metric: took 352.882042ms to LocalClient.Create
	I0915 11:41:54.551750    5179 start.go:128] duration metric: took 2.382481208s to createHost
	I0915 11:41:54.551830    5179 start.go:83] releasing machines lock for "docker-flags-824000", held for 2.382608333s
	W0915 11:41:54.551898    5179 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:54.575805    5179 out.go:177] * Deleting "docker-flags-824000" in qemu2 ...
	W0915 11:41:54.600026    5179 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:54.600042    5179 start.go:729] Will try again in 5 seconds ...
	I0915 11:41:59.602228    5179 start.go:360] acquireMachinesLock for docker-flags-824000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:59.647892    5179 start.go:364] duration metric: took 45.559042ms to acquireMachinesLock for "docker-flags-824000"
	I0915 11:41:59.648057    5179 start.go:93] Provisioning new machine with config: &{Name:docker-flags-824000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-824000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:59.648358    5179 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:59.654014    5179 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:59.705298    5179 start.go:159] libmachine.API.Create for "docker-flags-824000" (driver="qemu2")
	I0915 11:41:59.705353    5179 client.go:168] LocalClient.Create starting
	I0915 11:41:59.705477    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:59.705536    5179 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:59.705556    5179 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:59.705636    5179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:59.705683    5179 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:59.705697    5179 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:59.706279    5179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:59.940867    5179 main.go:141] libmachine: Creating SSH key...
	I0915 11:42:00.099787    5179 main.go:141] libmachine: Creating Disk image...
	I0915 11:42:00.099798    5179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:42:00.099986    5179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:42:00.109205    5179 main.go:141] libmachine: STDOUT: 
	I0915 11:42:00.109220    5179 main.go:141] libmachine: STDERR: 
	I0915 11:42:00.109273    5179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2 +20000M
	I0915 11:42:00.117077    5179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:42:00.117092    5179 main.go:141] libmachine: STDERR: 
	I0915 11:42:00.117104    5179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:42:00.117110    5179 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:42:00.117118    5179 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:42:00.117156    5179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:34:0a:34:ba:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/docker-flags-824000/disk.qcow2
	I0915 11:42:00.118844    5179 main.go:141] libmachine: STDOUT: 
	I0915 11:42:00.118861    5179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:42:00.118878    5179 client.go:171] duration metric: took 413.524625ms to LocalClient.Create
	I0915 11:42:02.121203    5179 start.go:128] duration metric: took 2.472770917s to createHost
	I0915 11:42:02.121305    5179 start.go:83] releasing machines lock for "docker-flags-824000", held for 2.473412542s
	W0915 11:42:02.121706    5179 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-824000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:42:02.142431    5179 out.go:201] 
	W0915 11:42:02.150408    5179 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:42:02.150471    5179 out.go:270] * 
	* 
	W0915 11:42:02.152477    5179 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:42:02.161364    5179 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-824000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (74.41975ms)

-- stdout --
	* The control-plane node docker-flags-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-824000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-824000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-824000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-824000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (40.842334ms)

-- stdout --
	* The control-plane node docker-flags-824000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-824000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-824000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-824000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-824000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-824000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-15 11:42:02.296265 -0700 PDT m=+2788.145632501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-824000 -n docker-flags-824000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-824000 -n docker-flags-824000: exit status 7 (29.517917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-824000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-824000
--- FAIL: TestDockerFlags (10.37s)
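
Note on the failure above: as with the other qemu2 starts, the VM never comes up, so both systemctl probes can only return the "host is not running" stub. The assertions at docker_test.go:63 and docker_test.go:73 check that --docker-env and --docker-opt values reach the Docker daemon's systemd unit; on a booted VM the same probes would be expected to surface them, roughly as follows (expected strings taken from the assertions above, not from an actual run):

	# --docker-env values should land in the unit's Environment:
	out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to include FOO=BAR and BAZ=BAT
	# --docker-opt values should land in ExecStart:
	out/minikube-darwin-arm64 -p docker-flags-824000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to include --debug (and, per the flags passed, icc=true)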

TestForceSystemdFlag (10.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-530000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-530000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.205903333s)

-- stdout --
	* [force-systemd-flag-530000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-530000" primary control-plane node in "force-systemd-flag-530000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-530000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:41:46.880286    5158 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:41:46.880423    5158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:46.880426    5158 out.go:358] Setting ErrFile to fd 2...
	I0915 11:41:46.880428    5158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:46.880570    5158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:41:46.881657    5158 out.go:352] Setting JSON to false
	I0915 11:41:46.897485    5158 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4269,"bootTime":1726421437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:41:46.897557    5158 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:41:46.903728    5158 out.go:177] * [force-systemd-flag-530000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:41:46.911775    5158 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:41:46.911819    5158 notify.go:220] Checking for updates...
	I0915 11:41:46.921695    5158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:41:46.925779    5158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:41:46.928693    5158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:41:46.931700    5158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:41:46.934750    5158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:41:46.938005    5158 config.go:182] Loaded profile config "force-systemd-env-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:46.938074    5158 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:46.938114    5158 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:41:46.941758    5158 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:41:46.948703    5158 start.go:297] selected driver: qemu2
	I0915 11:41:46.948709    5158 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:41:46.948714    5158 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:41:46.950846    5158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:41:46.953712    5158 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:41:46.956800    5158 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 11:41:46.956813    5158 cni.go:84] Creating CNI manager for ""
	I0915 11:41:46.956832    5158 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:41:46.956837    5158 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:41:46.956860    5158 start.go:340] cluster config:
	{Name:force-systemd-flag-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:41:46.960358    5158 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:41:46.967740    5158 out.go:177] * Starting "force-systemd-flag-530000" primary control-plane node in "force-systemd-flag-530000" cluster
	I0915 11:41:46.971740    5158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:41:46.971765    5158 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:41:46.971771    5158 cache.go:56] Caching tarball of preloaded images
	I0915 11:41:46.971835    5158 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:41:46.971841    5158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:41:46.971897    5158 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/force-systemd-flag-530000/config.json ...
	I0915 11:41:46.971908    5158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/force-systemd-flag-530000/config.json: {Name:mk4fa67000b11853ca21a0e9c75e6419305f04fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:41:46.972354    5158 start.go:360] acquireMachinesLock for force-systemd-flag-530000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:46.972391    5158 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "force-systemd-flag-530000"
	I0915 11:41:46.972403    5158 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:46.972440    5158 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:46.976750    5158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:46.993829    5158 start.go:159] libmachine.API.Create for "force-systemd-flag-530000" (driver="qemu2")
	I0915 11:41:46.993857    5158 client.go:168] LocalClient.Create starting
	I0915 11:41:46.993917    5158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:46.993950    5158 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:46.993960    5158 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:46.993996    5158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:46.994024    5158 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:46.994033    5158 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:46.994510    5158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:47.154560    5158 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:47.254761    5158 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:47.254766    5158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:47.254937    5158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:47.264173    5158 main.go:141] libmachine: STDOUT: 
	I0915 11:41:47.264199    5158 main.go:141] libmachine: STDERR: 
	I0915 11:41:47.264266    5158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2 +20000M
	I0915 11:41:47.272003    5158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:47.272024    5158 main.go:141] libmachine: STDERR: 
	I0915 11:41:47.272039    5158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:47.272042    5158 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:47.272053    5158 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:47.272083    5158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:49:30:20:7f:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:47.273656    5158 main.go:141] libmachine: STDOUT: 
	I0915 11:41:47.273673    5158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:47.273694    5158 client.go:171] duration metric: took 279.834167ms to LocalClient.Create
	I0915 11:41:49.275851    5158 start.go:128] duration metric: took 2.303414875s to createHost
	I0915 11:41:49.275917    5158 start.go:83] releasing machines lock for "force-systemd-flag-530000", held for 2.303545042s
	W0915 11:41:49.275988    5158 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:49.298228    5158 out.go:177] * Deleting "force-systemd-flag-530000" in qemu2 ...
	W0915 11:41:49.324589    5158 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:49.324611    5158 start.go:729] Will try again in 5 seconds ...
	I0915 11:41:54.326831    5158 start.go:360] acquireMachinesLock for force-systemd-flag-530000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:54.551965    5158 start.go:364] duration metric: took 225.033959ms to acquireMachinesLock for "force-systemd-flag-530000"
	I0915 11:41:54.552123    5158 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-530000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-530000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:54.552385    5158 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:54.561838    5158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:54.610629    5158 start.go:159] libmachine.API.Create for "force-systemd-flag-530000" (driver="qemu2")
	I0915 11:41:54.610693    5158 client.go:168] LocalClient.Create starting
	I0915 11:41:54.610858    5158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:54.610927    5158 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:54.610943    5158 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:54.611021    5158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:54.611067    5158 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:54.611079    5158 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:54.611747    5158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:54.839778    5158 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:54.977664    5158 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:54.977670    5158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:54.977869    5158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:54.987251    5158 main.go:141] libmachine: STDOUT: 
	I0915 11:41:54.987271    5158 main.go:141] libmachine: STDERR: 
	I0915 11:41:54.987334    5158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2 +20000M
	I0915 11:41:54.995116    5158 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:54.995131    5158 main.go:141] libmachine: STDERR: 
	I0915 11:41:54.995147    5158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:54.995152    5158 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:54.995162    5158 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:54.995192    5158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:7a:68:58:92:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-flag-530000/disk.qcow2
	I0915 11:41:54.996792    5158 main.go:141] libmachine: STDOUT: 
	I0915 11:41:54.996807    5158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:54.996839    5158 client.go:171] duration metric: took 386.144375ms to LocalClient.Create
	I0915 11:41:56.999072    5158 start.go:128] duration metric: took 2.446665083s to createHost
	I0915 11:41:56.999137    5158 start.go:83] releasing machines lock for "force-systemd-flag-530000", held for 2.447156125s
	W0915 11:41:56.999428    5158 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:57.022966    5158 out.go:201] 
	W0915 11:41:57.031030    5158 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:41:57.031075    5158 out.go:270] * 
	* 
	W0915 11:41:57.033852    5158 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:41:57.044880    5158 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-530000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-530000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-530000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (77.326625ms)

-- stdout --
	* The control-plane node force-systemd-flag-530000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-530000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-530000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-15 11:41:57.139119 -0700 PDT m=+2782.988423042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-530000 -n force-systemd-flag-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-530000 -n force-systemd-flag-530000: exit status 7 (34.747833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-530000
--- FAIL: TestForceSystemdFlag (10.40s)
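
Note on the failure mode: every qemu2 start in this group fails the same way, with socket_vmnet_client unable to reach /var/run/socket_vmnet ("Connection refused"), so the VM is never created and each test aborts before its real assertion runs. A minimal manual check on the test host, assuming the Homebrew-managed socket_vmnet service implied by the SocketVMnetClientPath above (illustrative commands, not part of the recorded run):

    # Is the socket present, and is the daemon registered with launchd?
    ls -l /var/run/socket_vmnet
    sudo launchctl list | grep -i socket_vmnet
    # If the socket is missing or stale, restarting the service may help:
    sudo brew services restart socket_vmnet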

TestForceSystemdEnv (12.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-014000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-014000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.88046875s)

-- stdout --
	* [force-systemd-env-014000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-014000" primary control-plane node in "force-systemd-env-014000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-014000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:41:40.007403    5123 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:41:40.007523    5123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:40.007526    5123 out.go:358] Setting ErrFile to fd 2...
	I0915 11:41:40.007528    5123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:41:40.007662    5123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:41:40.008834    5123 out.go:352] Setting JSON to false
	I0915 11:41:40.024807    5123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4263,"bootTime":1726421437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:41:40.024886    5123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:41:40.032090    5123 out.go:177] * [force-systemd-env-014000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:41:40.041045    5123 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:41:40.041066    5123 notify.go:220] Checking for updates...
	I0915 11:41:40.048101    5123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:41:40.051061    5123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:41:40.054083    5123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:41:40.057075    5123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:41:40.058603    5123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0915 11:41:40.062329    5123 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:41:40.062380    5123 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:41:40.067062    5123 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:41:40.073073    5123 start.go:297] selected driver: qemu2
	I0915 11:41:40.073082    5123 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:41:40.073090    5123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:41:40.075396    5123 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:41:40.078180    5123 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:41:40.081102    5123 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 11:41:40.081116    5123 cni.go:84] Creating CNI manager for ""
	I0915 11:41:40.081137    5123 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:41:40.081145    5123 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:41:40.081171    5123 start.go:340] cluster config:
	{Name:force-systemd-env-014000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:41:40.084840    5123 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:41:40.093123    5123 out.go:177] * Starting "force-systemd-env-014000" primary control-plane node in "force-systemd-env-014000" cluster
	I0915 11:41:40.097050    5123 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:41:40.097067    5123 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:41:40.097081    5123 cache.go:56] Caching tarball of preloaded images
	I0915 11:41:40.097139    5123 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:41:40.097148    5123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:41:40.097206    5123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/force-systemd-env-014000/config.json ...
	I0915 11:41:40.097218    5123 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/force-systemd-env-014000/config.json: {Name:mkf6bad128fbb39011d4bb9ae343a8104c234ec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:41:40.097436    5123 start.go:360] acquireMachinesLock for force-systemd-env-014000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:40.097472    5123 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "force-systemd-env-014000"
	I0915 11:41:40.097483    5123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:40.097517    5123 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:40.105088    5123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:40.123174    5123 start.go:159] libmachine.API.Create for "force-systemd-env-014000" (driver="qemu2")
	I0915 11:41:40.123209    5123 client.go:168] LocalClient.Create starting
	I0915 11:41:40.123276    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:40.123306    5123 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:40.123317    5123 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:40.123358    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:40.123382    5123 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:40.123391    5123 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:40.123744    5123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:40.282107    5123 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:40.386052    5123 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:40.386057    5123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:40.386209    5123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:40.395149    5123 main.go:141] libmachine: STDOUT: 
	I0915 11:41:40.395163    5123 main.go:141] libmachine: STDERR: 
	I0915 11:41:40.395220    5123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2 +20000M
	I0915 11:41:40.403029    5123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:40.403045    5123 main.go:141] libmachine: STDERR: 
	I0915 11:41:40.403062    5123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:40.403067    5123 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:40.403082    5123 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:40.403115    5123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:24:eb:ab:cc:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:40.404768    5123 main.go:141] libmachine: STDOUT: 
	I0915 11:41:40.404781    5123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:40.404808    5123 client.go:171] duration metric: took 281.596125ms to LocalClient.Create
	I0915 11:41:42.406866    5123 start.go:128] duration metric: took 2.309368292s to createHost
	I0915 11:41:42.406911    5123 start.go:83] releasing machines lock for "force-systemd-env-014000", held for 2.309442167s
	W0915 11:41:42.406926    5123 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:42.415982    5123 out.go:177] * Deleting "force-systemd-env-014000" in qemu2 ...
	W0915 11:41:42.429805    5123 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:42.429822    5123 start.go:729] Will try again in 5 seconds ...
	I0915 11:41:47.431959    5123 start.go:360] acquireMachinesLock for force-systemd-env-014000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:49.276032    5123 start.go:364] duration metric: took 1.844042167s to acquireMachinesLock for "force-systemd-env-014000"
	I0915 11:41:49.276218    5123 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:49.276658    5123 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:49.291140    5123 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0915 11:41:49.343103    5123 start.go:159] libmachine.API.Create for "force-systemd-env-014000" (driver="qemu2")
	I0915 11:41:49.343165    5123 client.go:168] LocalClient.Create starting
	I0915 11:41:49.343345    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:49.343429    5123 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:49.343449    5123 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:49.343521    5123 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:49.343565    5123 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:49.343579    5123 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:49.344198    5123 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:49.649983    5123 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:49.784070    5123 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:49.784077    5123 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:49.784295    5123 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:49.793950    5123 main.go:141] libmachine: STDOUT: 
	I0915 11:41:49.793966    5123 main.go:141] libmachine: STDERR: 
	I0915 11:41:49.794035    5123 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2 +20000M
	I0915 11:41:49.801879    5123 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:49.801895    5123 main.go:141] libmachine: STDERR: 
	I0915 11:41:49.801907    5123 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:49.801912    5123 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:49.801922    5123 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:49.801952    5123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:fa:90:74:da:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/force-systemd-env-014000/disk.qcow2
	I0915 11:41:49.803505    5123 main.go:141] libmachine: STDOUT: 
	I0915 11:41:49.803518    5123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:49.803533    5123 client.go:171] duration metric: took 460.356708ms to LocalClient.Create
	I0915 11:41:51.805820    5123 start.go:128] duration metric: took 2.529123875s to createHost
	I0915 11:41:51.805903    5123 start.go:83] releasing machines lock for "force-systemd-env-014000", held for 2.529857541s
	W0915 11:41:51.806302    5123 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-014000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-014000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:51.826900    5123 out.go:201] 
	W0915 11:41:51.830926    5123 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:41:51.830955    5123 out.go:270] * 
	* 
	W0915 11:41:51.833485    5123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:41:51.842899    5123 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-014000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-014000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-014000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (75.590041ms)

-- stdout --
	* The control-plane node force-systemd-env-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-014000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-014000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-15 11:41:51.936052 -0700 PDT m=+2777.785292126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-014000 -n force-systemd-env-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-014000 -n force-systemd-env-014000: exit status 7 (34.7215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-014000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-014000
--- FAIL: TestForceSystemdEnv (12.07s)
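
Because the VM never comes up, the cgroup-driver assertion that the force-systemd tests exist for is never exercised. For reference, the check that docker_test.go:110 runs against a healthy cluster is the command below; it is expected to print "systemd" when --force-systemd or MINIKUBE_FORCE_SYSTEMD=true took effect:

    out/minikube-darwin-arm64 -p force-systemd-env-014000 ssh "docker info --format {{.CgroupDriver}}"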

TestFunctional/parallel/ServiceCmdConnect (40.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-737000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-737000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-qw7xk" [8d47fbb8-02f8-4222-8c32-267d2c2616b7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-qw7xk" [8d47fbb8-02f8-4222-8c32-267d2c2616b7] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.010330666s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31125
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
E0915 11:14:54.088631    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
2024/09/15 11:15:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1661: error fetching http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31125: Get "http://192.168.105.4:31125": dial tcp 192.168.105.4:31125: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-737000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-qw7xk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-737000/192.168.105.4
Start Time:       Sun, 15 Sep 2024 11:14:41 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://326d90ab2d741013c1e3297fdb95c5b80f5651ad4b90eb398fea796484a53aa0
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 15 Sep 2024 11:15:02 -0700
      Finished:     Sun, 15 Sep 2024 11:15:02 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xfhcm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-xfhcm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-qw7xk to functional-737000
Normal   Pulling    39s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.039s (3.039s including waiting). Image size: 84957542 bytes.
Normal   Created    18s (x3 over 36s)  kubelet            Created container echoserver-arm
Normal   Started    18s (x3 over 36s)  kubelet            Started container echoserver-arm
Normal   Pulled     18s (x2 over 35s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    3s (x4 over 34s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-qw7xk_default(8d47fbb8-02f8-4222-8c32-267d2c2616b7)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-737000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
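The single log line above is the root cause of the CrashLoopBackOff: "exec format error" means the image's entrypoint binary (/usr/sbin/nginx) is built for a different CPU architecture than the arm64 node, so the container exits immediately and the Service never gets a ready endpoint. A hedged way to confirm from the same profile (the --format template uses standard docker image inspect fields):

	out/minikube-darwin-arm64 -p functional-737000 ssh "docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'"

Any value other than linux/arm64 here would explain the crash despite the image's -arm tag.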
functional_test.go:1614: (dbg) Run:  kubectl --context functional-737000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.50.150
IPs:                      10.110.50.150
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31125/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
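Note the empty Endpoints: field in the Service description above: with no Ready pod behind the app=hello-node-connect selector, kube-proxy has nothing to forward NodePort 31125 to, which is why every fetch earlier ended in "connection refused" rather than a timeout. A quick cross-check (sketch, same kubectl context as the test):

	kubectl --context functional-737000 get endpoints hello-node-connect -n default

An empty ENDPOINTS column in that output mirrors the describe output and ties the crashing container to the refused connections.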
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-737000 -n functional-737000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-737000 ssh stat                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh sudo                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3481278637/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh -- ls                                                                                          | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh sudo                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-737000 ssh findmnt                                                                                        | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-737000                                                                                                 | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-737000 --dry-run                                                                                       | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | -p functional-737000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| license   |                                                                                                                      | minikube          | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	| ssh       | functional-737000 ssh sudo                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|           | systemctl is-active crio                                                                                             |                   |         |         |                     |                     |
	| image     | functional-737000 image load --daemon                                                                                | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | kicbase/echo-server:functional-737000                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image     | functional-737000 image ls                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	| image     | functional-737000 image load --daemon                                                                                | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT | 15 Sep 24 11:15 PDT |
	|           | kicbase/echo-server:functional-737000                                                                                |                   |         |         |                     |                     |
	|           | --alsologtostderr                                                                                                    |                   |         |         |                     |                     |
	| image     | functional-737000 image ls                                                                                           | functional-737000 | jenkins | v1.34.0 | 15 Sep 24 11:15 PDT |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 11:15:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 11:15:09.891792    3389 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:15:09.891917    3389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.891920    3389 out.go:358] Setting ErrFile to fd 2...
	I0915 11:15:09.891922    3389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.892056    3389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:15:09.893088    3389 out.go:352] Setting JSON to false
	I0915 11:15:09.909612    3389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2672,"bootTime":1726421437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:15:09.909680    3389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:15:09.914938    3389 out.go:177] * [functional-737000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:15:09.921893    3389 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:15:09.921961    3389 notify.go:220] Checking for updates...
	I0915 11:15:09.928857    3389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:15:09.931897    3389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:15:09.934886    3389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:15:09.936334    3389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:15:09.938871    3389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:15:09.942218    3389 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:15:09.942457    3389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:15:09.946722    3389 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:15:09.953890    3389 start.go:297] selected driver: qemu2
	I0915 11:15:09.953896    3389 start.go:901] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:15:09.953952    3389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:15:09.956322    3389 cni.go:84] Creating CNI manager for ""
	I0915 11:15:09.956354    3389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:15:09.956390    3389 start.go:340] cluster config:
	{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:15:09.968840    3389 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Sep 15 18:15:10 functional-737000 dockerd[5988]: time="2024-09-15T18:15:10.995609131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:10 functional-737000 dockerd[5988]: time="2024-09-15T18:15:10.995637096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:11 functional-737000 cri-dockerd[6246]: time="2024-09-15T18:15:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8fe7ffd01dc630ef3ea900ec3fdbedc29d50d0f490d973bf79b1b46520e7fed2/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 15 18:15:11 functional-737000 cri-dockerd[6246]: time="2024-09-15T18:15:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65b14a1cdbf6291aa42226fbbfa8c9336775fdf402d4f228171481131305689a/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 15 18:15:11 functional-737000 dockerd[5982]: time="2024-09-15T18:15:11.294752686Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.063686089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.063756021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.063783527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.063867046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.085871212Z" level=info msg="shim disconnected" id=834f095bf22b37b7d7a111eddbbd5257fddc36b9c7f35f273bc820b1c0730ee7 namespace=moby
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.085901468Z" level=warning msg="cleaning up after shim disconnected" id=834f095bf22b37b7d7a111eddbbd5257fddc36b9c7f35f273bc820b1c0730ee7 namespace=moby
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.085905553Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 15 18:15:12 functional-737000 dockerd[5982]: time="2024-09-15T18:15:12.085997198Z" level=info msg="ignoring event" container=834f095bf22b37b7d7a111eddbbd5257fddc36b9c7f35f273bc820b1c0730ee7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 18:15:12 functional-737000 dockerd[5988]: time="2024-09-15T18:15:12.089856629Z" level=warning msg="cleanup warnings time=\"2024-09-15T18:15:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 15 18:15:16 functional-737000 cri-dockerd[6246]: time="2024-09-15T18:15:16Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 15 18:15:17 functional-737000 dockerd[5988]: time="2024-09-15T18:15:17.093671978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 15 18:15:17 functional-737000 dockerd[5988]: time="2024-09-15T18:15:17.093716030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 15 18:15:17 functional-737000 dockerd[5988]: time="2024-09-15T18:15:17.093729241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:17 functional-737000 dockerd[5988]: time="2024-09-15T18:15:17.093763374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:17 functional-737000 dockerd[5982]: time="2024-09-15T18:15:17.206746165Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 15 18:15:18 functional-737000 cri-dockerd[6246]: time="2024-09-15T18:15:18Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 15 18:15:19 functional-737000 dockerd[5988]: time="2024-09-15T18:15:19.027653011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 15 18:15:19 functional-737000 dockerd[5988]: time="2024-09-15T18:15:19.027700563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 15 18:15:19 functional-737000 dockerd[5988]: time="2024-09-15T18:15:19.027862307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:15:19 functional-737000 dockerd[5988]: time="2024-09-15T18:15:19.027921195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	253dd6050abd4       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 seconds ago        Running             dashboard-metrics-scraper   0                   8fe7ffd01dc63       dashboard-metrics-scraper-c5db448b4-lhcr6
	39d46abbe138f       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         4 seconds ago        Running             kubernetes-dashboard        0                   65b14a1cdbf62       kubernetes-dashboard-695b96c756-8pl62
	834f095bf22b3       72565bf5bbedf                                                                                          8 seconds ago        Exited              echoserver-arm              2                   0dba35d2f26a8       hello-node-64b4f8f9ff-kppnr
	716018e3bd10c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    15 seconds ago       Exited              mount-munger                0                   63a1a4092a9b5       busybox-mount
	326d90ab2d741       72565bf5bbedf                                                                                          18 seconds ago       Exited              echoserver-arm              2                   792a4248b6a88       hello-node-connect-65d86f57f4-qw7xk
	7966a2455789d       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                          32 seconds ago       Running             myfrontend                  0                   e2a19d6a69bcd       sp-pod
	156089c9ff9e6       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                          46 seconds ago       Running             nginx                       0                   36d51f772db23       nginx-svc
	46b7fba1f8cb0       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   a6cb6f5f2ad6d       storage-provisioner
	b229a8989eb6d       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   6da44f7ec4966       coredns-7c65d6cfc9-vwrd8
	052d0a74d6d8b       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         2                   a6cb6f5f2ad6d       storage-provisioner
	ff33c09dbf704       24a140c548c07                                                                                          About a minute ago   Running             kube-proxy                  2                   fd55332d0930c       kube-proxy-ql4nq
	127209af604c2       7f8aa378bb47d                                                                                          About a minute ago   Running             kube-scheduler              2                   ceb47d5e6dd61       kube-scheduler-functional-737000
	ac2610fe0dbbc       279f381cb3736                                                                                          About a minute ago   Running             kube-controller-manager     2                   a8a87045097c9       kube-controller-manager-functional-737000
	503502d66ac1b       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   cdd8d0e4fd192       etcd-functional-737000
	39bd6183a6796       d3f53a98c0a9d                                                                                          About a minute ago   Running             kube-apiserver              0                   0918479a2f0d9       kube-apiserver-functional-737000
	e0a5e193c3f77       2f6c962e7b831                                                                                          2 minutes ago        Exited              coredns                     1                   a66393a9ae90e       coredns-7c65d6cfc9-vwrd8
	391318f03d012       24a140c548c07                                                                                          2 minutes ago        Exited              kube-proxy                  1                   434dc9012282d       kube-proxy-ql4nq
	cc171da6cd489       7f8aa378bb47d                                                                                          2 minutes ago        Exited              kube-scheduler              1                   679e0f8a32189       kube-scheduler-functional-737000
	00b4267d905ec       279f381cb3736                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   060542ba51cba       kube-controller-manager-functional-737000
	8c8540b93468b       27e3830e14027                                                                                          2 minutes ago        Exited              etcd                        1                   bd741e27f91bf       etcd-functional-737000
	
	
	==> coredns [b229a8989eb6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35890 - 55706 "HINFO IN 1477574802834898850.7909584803851224253. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005231859s
	[INFO] 10.244.0.1:13442 - 24627 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000132865s
	[INFO] 10.244.0.1:33102 - 28232 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000138908s
	[INFO] 10.244.0.1:34354 - 19170 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00123475s
	[INFO] 10.244.0.1:5474 - 8413 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000020171s
	[INFO] 10.244.0.1:57792 - 31034 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000041135s
	[INFO] 10.244.0.1:43176 - 232 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000187503s
	
	
	==> coredns [e0a5e193c3f7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46022 - 18374 "HINFO IN 645634257124844869.4906505422045697929. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004958127s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-737000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-737000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673
	                    minikube.k8s.io/name=functional-737000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T11_12_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 18:12:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-737000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 18:15:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 18:15:06 +0000   Sun, 15 Sep 2024 18:12:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 18:15:06 +0000   Sun, 15 Sep 2024 18:12:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 18:15:06 +0000   Sun, 15 Sep 2024 18:12:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 18:15:06 +0000   Sun, 15 Sep 2024 18:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-737000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 caf3c7a312394408b0b375c79faa926e
	  System UUID:                caf3c7a312394408b0b375c79faa926e
	  Boot ID:                    34d632d1-b3c7-4bc9-8e40-4c018872a2c7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-kppnr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     hello-node-connect-65d86f57f4-qw7xk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 coredns-7c65d6cfc9-vwrd8                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m58s
	  kube-system                 etcd-functional-737000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m5s
	  kube-system                 kube-apiserver-functional-737000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-737000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-proxy-ql4nq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-scheduler-functional-737000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-lhcr6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-8pl62        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  Starting                 74s                  kube-proxy       
	  Normal  Starting                 2m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s                 kubelet          Node functional-737000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    3m4s                 kubelet          Node functional-737000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s                 kubelet          Node functional-737000 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m4s                 kubelet          Starting kubelet.
	  Normal  NodeReady                3m                   kubelet          Node functional-737000 status is now: NodeReady
	  Normal  RegisteredNode           2m59s                node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node functional-737000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node functional-737000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node functional-737000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                 node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	  Normal  Starting                 80s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)    kubelet          Node functional-737000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)    kubelet          Node functional-737000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)    kubelet          Node functional-737000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                  node-controller  Node functional-737000 event: Registered Node functional-737000 in Controller
	
	
	==> dmesg <==
	[  +0.058625] kauditd_printk_skb: 33 callbacks suppressed
	[ +12.152890] systemd-fstab-generator[5506]: Ignoring "noauto" option for root device
	[  +0.052725] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.099982] systemd-fstab-generator[5541]: Ignoring "noauto" option for root device
	[  +0.091414] systemd-fstab-generator[5553]: Ignoring "noauto" option for root device
	[  +0.090641] systemd-fstab-generator[5567]: Ignoring "noauto" option for root device
	[  +5.107615] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.353138] systemd-fstab-generator[6194]: Ignoring "noauto" option for root device
	[  +0.100269] systemd-fstab-generator[6206]: Ignoring "noauto" option for root device
	[  +0.071143] systemd-fstab-generator[6218]: Ignoring "noauto" option for root device
	[  +0.084558] systemd-fstab-generator[6233]: Ignoring "noauto" option for root device
	[  +0.207134] systemd-fstab-generator[6404]: Ignoring "noauto" option for root device
	[Sep15 18:14] systemd-fstab-generator[6526]: Ignoring "noauto" option for root device
	[  +4.409218] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.768559] kauditd_printk_skb: 34 callbacks suppressed
	[  +2.826002] systemd-fstab-generator[7621]: Ignoring "noauto" option for root device
	[  +4.368071] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.663410] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.052274] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.455994] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.932678] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.570970] kauditd_printk_skb: 15 callbacks suppressed
	[Sep15 18:15] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.009282] kauditd_printk_skb: 17 callbacks suppressed
	[  +9.993776] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [503502d66ac1] <==
	{"level":"info","ts":"2024-09-15T18:14:03.036657Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-15T18:14:03.036699Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:14:03.036747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:14:03.037852Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T18:14:03.038403Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T18:14:03.038462Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-15T18:14:03.038476Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-15T18:14:03.039263Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T18:14:03.039764Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T18:14:04.631102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-15T18:14:04.631283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-15T18:14:04.631373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-15T18:14:04.631408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-15T18:14:04.631425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-15T18:14:04.631450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-15T18:14:04.631476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-15T18:14:04.635978Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-737000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T18:14:04.636318Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:14:04.636659Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T18:14:04.636718Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T18:14:04.636757Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:14:04.638746Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T18:14:04.638751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T18:14:04.641040Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T18:14:04.642419Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [8c8540b93468] <==
	{"level":"info","ts":"2024-09-15T18:13:19.347518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-15T18:13:19.347910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-15T18:13:19.347977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-15T18:13:19.347997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-15T18:13:19.348028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-15T18:13:19.348097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-15T18:13:19.351181Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-737000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T18:13:19.351317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:13:19.351991Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:13:19.353715Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T18:13:19.355842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T18:13:19.356376Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T18:13:19.356577Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T18:13:19.358171Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T18:13:19.373274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-15T18:13:48.086897Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-15T18:13:48.086926Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-737000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-15T18:13:48.086962Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T18:13:48.087005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T18:13:48.094087Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-15T18:13:48.094115Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-15T18:13:48.095366Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-15T18:13:48.096603Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-15T18:13:48.096635Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-15T18:13:48.096639Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-737000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:15:21 up 3 min,  0 users,  load average: 1.49, 0.72, 0.29
	Linux functional-737000 5.10.207 #1 SMP PREEMPT Sun Sep 15 01:47:50 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39bd6183a679] <==
	I0915 18:14:05.247700       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 18:14:05.247743       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 18:14:05.248890       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0915 18:14:05.257513       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 18:14:05.257565       1 aggregator.go:171] initial CRD sync complete...
	I0915 18:14:05.257577       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 18:14:05.257610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 18:14:05.257653       1 cache.go:39] Caches are synced for autoregister controller
	I0915 18:14:06.149692       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0915 18:14:06.251650       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I0915 18:14:06.252588       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 18:14:06.254180       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 18:14:06.589305       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0915 18:14:06.593049       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0915 18:14:06.603315       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0915 18:14:06.610273       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 18:14:06.613379       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 18:14:25.275209       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.246.141"}
	I0915 18:14:30.939170       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.49.191"}
	I0915 18:14:41.380627       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0915 18:14:41.441452       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.50.150"}
	I0915 18:14:54.957803       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.182.48"}
	I0915 18:15:10.603980       1 controller.go:615] quota admission added evaluator for: namespaces
	I0915 18:15:10.696009       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.115.133"}
	I0915 18:15:10.719536       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.82.48"}
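
Each "allocated clusterIPs" entry above marks a Service being admitted and assigned a virtual IP from the service CIDR. For reference, a small client-go sketch (the kubeconfig path is an assumption; minikube writes its contexts there by default) lists the Services in the default namespace with the IPs the apiserver allocated:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Kubeconfig location is an assumption; adjust for a specific profile.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svcs, err := cs.CoreV1().Services("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range svcs.Items {
		// Spec.ClusterIP is the address the apiserver allocated in the log above.
		fmt.Printf("%s\t%s\n", s.Name, s.Spec.ClusterIP)
	}
}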
	
	
	==> kube-controller-manager [00b4267d905e] <==
	I0915 18:13:23.410433       1 shared_informer.go:320] Caches are synced for taint
	I0915 18:13:23.410491       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 18:13:23.410544       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-737000"
	I0915 18:13:23.410592       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 18:13:23.410622       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 18:13:23.411937       1 shared_informer.go:320] Caches are synced for PVC protection
	I0915 18:13:23.413666       1 shared_informer.go:320] Caches are synced for job
	I0915 18:13:23.418494       1 shared_informer.go:320] Caches are synced for attach detach
	I0915 18:13:23.427866       1 shared_informer.go:320] Caches are synced for persistent volume
	I0915 18:13:23.427923       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0915 18:13:23.434458       1 shared_informer.go:320] Caches are synced for endpoint
	I0915 18:13:23.452731       1 shared_informer.go:320] Caches are synced for deployment
	I0915 18:13:23.453076       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0915 18:13:23.456789       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0915 18:13:23.456805       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 18:13:23.503507       1 shared_informer.go:320] Caches are synced for HPA
	I0915 18:13:23.503539       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 18:13:23.503546       1 shared_informer.go:320] Caches are synced for disruption
	I0915 18:13:23.503550       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0915 18:13:23.503766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="31.838µs"
	I0915 18:13:23.503555       1 shared_informer.go:320] Caches are synced for GC
	I0915 18:13:23.505367       1 shared_informer.go:320] Caches are synced for ephemeral
	I0915 18:13:23.889707       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 18:13:23.930635       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 18:13:23.930693       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ac2610fe0dbb] <==
	I0915 18:15:02.972505       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="37.925µs"
	I0915 18:15:06.344140       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-737000"
	I0915 18:15:10.634150       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.091703ms"
	E0915 18:15:10.634171       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 18:15:10.636172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.941895ms"
	E0915 18:15:10.636210       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 18:15:10.640587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.709558ms"
	E0915 18:15:10.640605       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 18:15:10.640621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.291116ms"
	E0915 18:15:10.640699       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0915 18:15:10.652440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.094971ms"
	I0915 18:15:10.663495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.026356ms"
	I0915 18:15:10.663533       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.755µs"
	I0915 18:15:10.663556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="20.893789ms"
	I0915 18:15:10.671982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="33.34µs"
	I0915 18:15:10.676834       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.754344ms"
	I0915 18:15:10.676872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.586µs"
	I0915 18:15:10.681793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.003µs"
	I0915 18:15:12.028628       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.467µs"
	I0915 18:15:12.114802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="24.756µs"
	I0915 18:15:17.065693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.589µs"
	I0915 18:15:17.162107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.685575ms"
	I0915 18:15:17.162172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.753µs"
	I0915 18:15:19.177840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.719039ms"
	I0915 18:15:19.177869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.419µs"
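
The "Unhandled Error" lines above are a startup race rather than a persistent failure: the dashboard ReplicaSets are created before the kubernetes-dashboard ServiceAccount exists, pod creation is forbidden until it appears, and the controller's retries then succeed (the later "Finished syncing" lines). A hedged client-go helper that waits out the same race explicitly; the package and function names and the timeouts are illustrative:

package k8sutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForServiceAccount polls until the named ServiceAccount exists, making
// explicit the retry the ReplicaSet controller performs implicitly above.
func WaitForServiceAccount(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
}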
	
	
	==> kube-proxy [391318f03d01] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 18:13:20.618824       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 18:13:20.622409       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0915 18:13:20.622481       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 18:13:20.630505       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 18:13:20.630521       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 18:13:20.630592       1 server_linux.go:169] "Using iptables Proxier"
	I0915 18:13:20.631215       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 18:13:20.631288       1 server.go:483] "Version info" version="v1.31.1"
	I0915 18:13:20.631296       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 18:13:20.631734       1 config.go:199] "Starting service config controller"
	I0915 18:13:20.631747       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 18:13:20.631756       1 config.go:105] "Starting endpoint slice config controller"
	I0915 18:13:20.631792       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 18:13:20.631982       1 config.go:328] "Starting node config controller"
	I0915 18:13:20.631989       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 18:13:20.732527       1 shared_informer.go:320] Caches are synced for node config
	I0915 18:13:20.732528       1 shared_informer.go:320] Caches are synced for service config
	I0915 18:13:20.732536       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ff33c09dbf70] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0915 18:14:06.530569       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0915 18:14:06.534211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0915 18:14:06.534241       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 18:14:06.548642       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0915 18:14:06.548672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0915 18:14:06.548686       1 server_linux.go:169] "Using iptables Proxier"
	I0915 18:14:06.551237       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 18:14:06.551423       1 server.go:483] "Version info" version="v1.31.1"
	I0915 18:14:06.551541       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 18:14:06.553329       1 config.go:199] "Starting service config controller"
	I0915 18:14:06.553348       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 18:14:06.553398       1 config.go:105] "Starting endpoint slice config controller"
	I0915 18:14:06.553407       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 18:14:06.553948       1 config.go:328] "Starting node config controller"
	I0915 18:14:06.553974       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 18:14:06.653516       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 18:14:06.653516       1 shared_informer.go:320] Caches are synced for service config
	I0915 18:14:06.654010       1 shared_informer.go:320] Caches are synced for node config
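
Both kube-proxy instances log the same startup sequence: the attempt to clean up leftover nftables rules fails because this guest kernel rejects the operation, IPv6 iptables support is also absent, and the proxier falls back to single-stack IPv4 iptables mode. The failing cleanup amounts to an nft table create; a hedged Go sketch of the same probe via os/exec (the nft binary being installed is an assumption, and on this Buildroot guest the kernel answers "Operation not supported" as above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same kind of operation kube-proxy's cleanup attempted in the log above:
	// create an nftables table, which requires kernel nf_tables support.
	out, err := exec.Command("nft", "add", "table", "ip", "probe").CombinedOutput()
	if err != nil {
		fmt.Printf("nftables unavailable: %v\n%s", err, out)
		return
	}
	// Remove the probe table again if creation succeeded.
	_ = exec.Command("nft", "delete", "table", "ip", "probe").Run()
	fmt.Println("nftables available")
}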
	
	
	==> kube-scheduler [127209af604c] <==
	I0915 18:14:03.331054       1 serving.go:386] Generated self-signed cert in-memory
	W0915 18:14:05.168828       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 18:14:05.168845       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 18:14:05.168850       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 18:14:05.168853       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 18:14:05.200351       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 18:14:05.200408       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 18:14:05.203796       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 18:14:05.203902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 18:14:05.203952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 18:14:05.203975       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 18:14:05.308004       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cc171da6cd48] <==
	I0915 18:13:18.095147       1 serving.go:386] Generated self-signed cert in-memory
	W0915 18:13:19.888392       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 18:13:19.888580       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 18:13:19.888623       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 18:13:19.888642       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 18:13:19.929542       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 18:13:19.929564       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 18:13:19.931006       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 18:13:19.931070       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 18:13:19.931083       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 18:13:19.931090       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 18:13:20.031277       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 18:13:48.085009       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0915 18:13:48.085041       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0915 18:13:48.085104       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0915 18:13:48.085212       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 18:15:02 functional-737000 kubelet[6533]: I0915 18:15:02.960254    6533 scope.go:117] "RemoveContainer" containerID="326d90ab2d741013c1e3297fdb95c5b80f5651ad4b90eb398fea796484a53aa0"
	Sep 15 18:15:02 functional-737000 kubelet[6533]: E0915 18:15:02.960566    6533 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-qw7xk_default(8d47fbb8-02f8-4222-8c32-267d2c2616b7)\"" pod="default/hello-node-connect-65d86f57f4-qw7xk" podUID="8d47fbb8-02f8-4222-8c32-267d2c2616b7"
	Sep 15 18:15:03 functional-737000 kubelet[6533]: I0915 18:15:03.291992    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-test-volume\") pod \"busybox-mount\" (UID: \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\") " pod="default/busybox-mount"
	Sep 15 18:15:03 functional-737000 kubelet[6533]: I0915 18:15:03.292021    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xjv\" (UniqueName: \"kubernetes.io/projected/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-kube-api-access-d5xjv\") pod \"busybox-mount\" (UID: \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\") " pod="default/busybox-mount"
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.227518    6533 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5xjv\" (UniqueName: \"kubernetes.io/projected/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-kube-api-access-d5xjv\") pod \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\" (UID: \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\") "
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.227549    6533 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-test-volume\") pod \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\" (UID: \"38f1f4c1-21ef-4710-a1a0-564a5a041d9a\") "
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.227586    6533 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-test-volume" (OuterVolumeSpecName: "test-volume") pod "38f1f4c1-21ef-4710-a1a0-564a5a041d9a" (UID: "38f1f4c1-21ef-4710-a1a0-564a5a041d9a"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.230263    6533 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-kube-api-access-d5xjv" (OuterVolumeSpecName: "kube-api-access-d5xjv") pod "38f1f4c1-21ef-4710-a1a0-564a5a041d9a" (UID: "38f1f4c1-21ef-4710-a1a0-564a5a041d9a"). InnerVolumeSpecName "kube-api-access-d5xjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.328051    6533 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d5xjv\" (UniqueName: \"kubernetes.io/projected/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-kube-api-access-d5xjv\") on node \"functional-737000\" DevicePath \"\""
	Sep 15 18:15:07 functional-737000 kubelet[6533]: I0915 18:15:07.328074    6533 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/38f1f4c1-21ef-4710-a1a0-564a5a041d9a-test-volume\") on node \"functional-737000\" DevicePath \"\""
	Sep 15 18:15:08 functional-737000 kubelet[6533]: I0915 18:15:08.067816    6533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a1a4092a9b5704ecaa05e9174f1cc47fa09ae2812d634fb39b1e290a587a54"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: E0915 18:15:10.650087    6533 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="38f1f4c1-21ef-4710-a1a0-564a5a041d9a" containerName="mount-munger"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: I0915 18:15:10.650115    6533 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f1f4c1-21ef-4710-a1a0-564a5a041d9a" containerName="mount-munger"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: I0915 18:15:10.749760    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d0f380f0-3494-4562-8d0d-644d4857b6e7-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-lhcr6\" (UID: \"d0f380f0-3494-4562-8d0d-644d4857b6e7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-lhcr6"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: I0915 18:15:10.749789    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fnw\" (UniqueName: \"kubernetes.io/projected/d0f380f0-3494-4562-8d0d-644d4857b6e7-kube-api-access-44fnw\") pod \"dashboard-metrics-scraper-c5db448b4-lhcr6\" (UID: \"d0f380f0-3494-4562-8d0d-644d4857b6e7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-lhcr6"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: I0915 18:15:10.749801    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1e83717-423c-4287-abd2-59ae427008c3-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-8pl62\" (UID: \"c1e83717-423c-4287-abd2-59ae427008c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-8pl62"
	Sep 15 18:15:10 functional-737000 kubelet[6533]: I0915 18:15:10.749813    6533 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgs5\" (UniqueName: \"kubernetes.io/projected/c1e83717-423c-4287-abd2-59ae427008c3-kube-api-access-wfgs5\") pod \"kubernetes-dashboard-695b96c756-8pl62\" (UID: \"c1e83717-423c-4287-abd2-59ae427008c3\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-8pl62"
	Sep 15 18:15:12 functional-737000 kubelet[6533]: I0915 18:15:12.003417    6533 scope.go:117] "RemoveContainer" containerID="b5dc5aa5a0604efbf272807c13f28984671f78d1051b096925193db82d79f25c"
	Sep 15 18:15:12 functional-737000 kubelet[6533]: I0915 18:15:12.109892    6533 scope.go:117] "RemoveContainer" containerID="b5dc5aa5a0604efbf272807c13f28984671f78d1051b096925193db82d79f25c"
	Sep 15 18:15:12 functional-737000 kubelet[6533]: I0915 18:15:12.110038    6533 scope.go:117] "RemoveContainer" containerID="834f095bf22b37b7d7a111eddbbd5257fddc36b9c7f35f273bc820b1c0730ee7"
	Sep 15 18:15:12 functional-737000 kubelet[6533]: E0915 18:15:12.110102    6533 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-kppnr_default(b4bcf037-e61f-41c0-a225-a1f9516847ff)\"" pod="default/hello-node-64b4f8f9ff-kppnr" podUID="b4bcf037-e61f-41c0-a225-a1f9516847ff"
	Sep 15 18:15:17 functional-737000 kubelet[6533]: I0915 18:15:17.011432    6533 scope.go:117] "RemoveContainer" containerID="326d90ab2d741013c1e3297fdb95c5b80f5651ad4b90eb398fea796484a53aa0"
	Sep 15 18:15:17 functional-737000 kubelet[6533]: E0915 18:15:17.011535    6533 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-qw7xk_default(8d47fbb8-02f8-4222-8c32-267d2c2616b7)\"" pod="default/hello-node-connect-65d86f57f4-qw7xk" podUID="8d47fbb8-02f8-4222-8c32-267d2c2616b7"
	Sep 15 18:15:17 functional-737000 kubelet[6533]: I0915 18:15:17.156676    6533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-8pl62" podStartSLOduration=1.242401417 podStartE2EDuration="7.156665912s" podCreationTimestamp="2024-09-15 18:15:10 +0000 UTC" firstStartedPulling="2024-09-15 18:15:11.07819704 +0000 UTC m=+69.130144555" lastFinishedPulling="2024-09-15 18:15:16.992461535 +0000 UTC m=+75.044409050" observedRunningTime="2024-09-15 18:15:17.15646716 +0000 UTC m=+75.208414675" watchObservedRunningTime="2024-09-15 18:15:17.156665912 +0000 UTC m=+75.208613469"
	Sep 15 18:15:19 functional-737000 kubelet[6533]: I0915 18:15:19.172360    6533 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-lhcr6" podStartSLOduration=1.265974518 podStartE2EDuration="9.172342472s" podCreationTimestamp="2024-09-15 18:15:10 +0000 UTC" firstStartedPulling="2024-09-15 18:15:11.080419407 +0000 UTC m=+69.132366964" lastFinishedPulling="2024-09-15 18:15:18.986787403 +0000 UTC m=+77.038734918" observedRunningTime="2024-09-15 18:15:19.172327635 +0000 UTC m=+77.224275192" watchObservedRunningTime="2024-09-15 18:15:19.172342472 +0000 UTC m=+77.224290028"
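
The kubelet lines show the echoserver-arm containers cycling through CrashLoopBackOff with a 20s back-off. Kubelet's documented restart back-off starts at 10s, doubles on each restart, and is capped at five minutes, so "back-off 20s" is the second step. A small sketch of that schedule (defaults only; kubelet can reset the back-off once a container has run cleanly for long enough):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Documented CrashLoopBackOff schedule: 10s doubling per restart,
	// capped at five minutes.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}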
	
	
	==> kubernetes-dashboard [39d46abbe138] <==
	2024/09/15 18:15:17 Starting overwatch
	2024/09/15 18:15:17 Using namespace: kubernetes-dashboard
	2024/09/15 18:15:17 Using in-cluster config to connect to apiserver
	2024/09/15 18:15:17 Using secret token for csrf signing
	2024/09/15 18:15:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/15 18:15:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/15 18:15:17 Successful initial request to the apiserver, version: v1.31.1
	2024/09/15 18:15:17 Generating JWE encryption key
	2024/09/15 18:15:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/15 18:15:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/15 18:15:17 Initializing JWE encryption key from synchronized object
	2024/09/15 18:15:17 Creating in-cluster Sidecar client
	2024/09/15 18:15:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/15 18:15:17 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [052d0a74d6d8] <==
	I0915 18:14:06.514692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0915 18:14:06.515835       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [46b7fba1f8cb] <==
	I0915 18:14:18.095820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 18:14:18.099400       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 18:14:18.099445       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 18:14:35.517301       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 18:14:35.518250       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9ae538a5-60fc-4128-9af5-fca0db98a5de", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-737000_6c824b49-10ef-48d8-afc9-d7fedc07d386 became leader
	I0915 18:14:35.518481       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-737000_6c824b49-10ef-48d8-afc9-d7fedc07d386!
	I0915 18:14:35.618907       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-737000_6c824b49-10ef-48d8-afc9-d7fedc07d386!
	I0915 18:14:36.070701       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0915 18:14:36.071877       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"99f16279-228d-42cc-b737-fcb5889a7a71", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0915 18:14:36.070731       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    50a047e6-f0af-4b47-a909-1bf035930eb2 336 0 2024-09-15 18:12:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-15 18:12:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-99f16279-228d-42cc-b737-fcb5889a7a71 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  99f16279-228d-42cc-b737-fcb5889a7a71 686 0 2024-09-15 18:14:36 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-15 18:14:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-15 18:14:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0915 18:14:36.073165       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-99f16279-228d-42cc-b737-fcb5889a7a71" provisioned
	I0915 18:14:36.073181       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0915 18:14:36.073209       1 volume_store.go:212] Trying to save persistentvolume "pvc-99f16279-228d-42cc-b737-fcb5889a7a71"
	I0915 18:14:36.076920       1 volume_store.go:219] persistentvolume "pvc-99f16279-228d-42cc-b737-fcb5889a7a71" saved
	I0915 18:14:36.079483       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"99f16279-228d-42cc-b737-fcb5889a7a71", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-99f16279-228d-42cc-b737-fcb5889a7a71
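
The storage-provisioner flow above is: acquire the k8s.io-minikube-hostpath lease, start the provisioner controller, then on the "default/myclaim" PVC event provision a hostpath volume under /tmp/hostpath-provisioner and save the resulting PV. For reference, a hedged client-go sketch that submits an equivalent 500Mi ReadWriteOnce claim against the default storage class (field types as of k8s.io/api v0.29+; earlier releases use ResourceRequirements for Spec.Resources):

package pvcutil

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// CreateClaim submits a PVC equivalent to "default/myclaim" above: 500Mi,
// ReadWriteOnce, default ("standard") storage class left implicit.
func CreateClaim(ctx context.Context, cs kubernetes.Interface) error {
	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "myclaim", Namespace: "default"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("500Mi"),
				},
			},
		},
	}
	_, err := cs.CoreV1().PersistentVolumeClaims("default").Create(ctx, pvc, metav1.CreateOptions{})
	return err
}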
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-737000 -n functional-737000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-737000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-737000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-737000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-737000/192.168.105.4
	Start Time:       Sun, 15 Sep 2024 11:15:03 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://716018e3bd10c779f9f0c9de66324f82ffa9dd988c35e98c0e5d08902027d543
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 15 Sep 2024 11:15:05 -0700
	      Finished:     Sun, 15 Sep 2024 11:15:05 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d5xjv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-d5xjv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  18s   default-scheduler  Successfully assigned default/busybox-mount to functional-737000
	  Normal  Pulling    18s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     16s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.387s (1.387s including waiting). Image size: 3547125 bytes.
	  Normal  Created    16s   kubelet            Created container mount-munger
	  Normal  Started    16s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (40.44s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 node stop m02 -v=7 --alsologtostderr
E0915 11:19:31.081049    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:32.364476    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:34.926048    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:40.049473    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:40.847317    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-748000 node stop m02 -v=7 --alsologtostderr: (12.184758041s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
E0915 11:19:50.292865    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:20:10.775967    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:20:51.738569    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:22:13.660173    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr: exit status 7 (2m55.977505833s)

                                                
                                                
-- stdout --
	ha-748000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-748000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-748000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:19:42.984926    3875 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:19:42.985071    3875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:19:42.985075    3875 out.go:358] Setting ErrFile to fd 2...
	I0915 11:19:42.985078    3875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:19:42.985233    3875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:19:42.985357    3875 out.go:352] Setting JSON to false
	I0915 11:19:42.985373    3875 mustload.go:65] Loading cluster: ha-748000
	I0915 11:19:42.985418    3875 notify.go:220] Checking for updates...
	I0915 11:19:42.985619    3875 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:19:42.985626    3875 status.go:255] checking status of ha-748000 ...
	I0915 11:19:42.986440    3875 status.go:330] ha-748000 host status = "Running" (err=<nil>)
	I0915 11:19:42.986449    3875 host.go:66] Checking if "ha-748000" exists ...
	I0915 11:19:42.986560    3875 host.go:66] Checking if "ha-748000" exists ...
	I0915 11:19:42.986684    3875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:19:42.986692    3875 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/id_rsa Username:docker}
	W0915 11:20:08.909104    3875 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0915 11:20:08.909238    3875 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0915 11:20:08.909258    3875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0915 11:20:08.909269    3875 status.go:257] ha-748000 status: &{Name:ha-748000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:20:08.909290    3875 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0915 11:20:08.909301    3875 status.go:255] checking status of ha-748000-m02 ...
	I0915 11:20:08.909760    3875 status.go:330] ha-748000-m02 host status = "Stopped" (err=<nil>)
	I0915 11:20:08.909770    3875 status.go:343] host is not running, skipping remaining checks
	I0915 11:20:08.909775    3875 status.go:257] ha-748000-m02 status: &{Name:ha-748000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:20:08.909785    3875 status.go:255] checking status of ha-748000-m03 ...
	I0915 11:20:08.910968    3875 status.go:330] ha-748000-m03 host status = "Running" (err=<nil>)
	I0915 11:20:08.910980    3875 host.go:66] Checking if "ha-748000-m03" exists ...
	I0915 11:20:08.911212    3875 host.go:66] Checking if "ha-748000-m03" exists ...
	I0915 11:20:08.911520    3875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:20:08.911536    3875 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m03/id_rsa Username:docker}
	W0915 11:21:23.911692    3875 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0915 11:21:23.911760    3875 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0915 11:21:23.911775    3875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0915 11:21:23.911779    3875 status.go:257] ha-748000-m03 status: &{Name:ha-748000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:21:23.911789    3875 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0915 11:21:23.911794    3875 status.go:255] checking status of ha-748000-m04 ...
	I0915 11:21:23.912644    3875 status.go:330] ha-748000-m04 host status = "Running" (err=<nil>)
	I0915 11:21:23.912651    3875 host.go:66] Checking if "ha-748000-m04" exists ...
	I0915 11:21:23.912747    3875 host.go:66] Checking if "ha-748000-m04" exists ...
	I0915 11:21:23.912875    3875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:21:23.912880    3875 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m04/id_rsa Username:docker}
	W0915 11:22:38.912803    3875 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0915 11:22:38.912874    3875 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0915 11:22:38.912886    3875 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0915 11:22:38.912890    3875 status.go:257] ha-748000-m04 status: &{Name:ha-748000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:22:38.912900    3875 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
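
The stderr above accounts for the 2m55s runtime of the status call: for each node, minikube opens an SSH session to run df -h /var, and the TCP dials to port 22 on .5, .7 and .8 each sit in the OS connect timeout before failing, which is what turns those hosts into "Error"/"Nonexistent" in the stdout. A hedged reachability sketch with an explicit deadline (addresses copied from the log; this is not minikube's implementation):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// An explicit 5s deadline avoids the long OS-level connect timeout that
	// stretches each probe in the log above to over a minute.
	for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}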
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-748000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-748000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-748000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-748000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 3 (25.957405708s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0915 11:23:04.870165    3931 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0915 11:23:04.870178    3931 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
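Every status.go:376 error above comes from the same probe: `minikube status` opens an SSH session to each node and runs `df -h /var | awk 'NR==2{print $5}'` to read the Use% column for /var, so a TCP dial timeout on port 22 is reported as a failure to get storage capacity. The 75-second stretches between node checks (11:20:08 to 11:21:23 to 11:22:38) line up with the OS-level TCP connect timeout expiring. Below is a minimal standalone sketch of that probe, using golang.org/x/crypto/ssh directly rather than minikube's own sshutil wrapper; the key handling and the explicit dial timeout are my assumptions, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
	"time"

	"golang.org/x/crypto/ssh"
)

// storageUsage dials host:22 and runs the same df/awk pipeline the log
// shows, returning the Use% column for /var, e.g. "23%".
func storageUsage(host, user, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-VM setting, not for production
		Timeout:         30 * time.Second,            // bound the dial explicitly (assumption)
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", fmt.Errorf("new client: %w", err) // the "dial tcp ...: operation timed out" path
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", fmt.Errorf("NewSession: %w", err)
	}
	defer sess.Close()
	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	usage, err := storageUsage("192.168.105.7", "docker",
		"/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m03/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("/var usage:", usage)
}

Setting ClientConfig.Timeout bounds each dial explicitly instead of waiting out the OS default, which is what stretches this status sweep to several minutes across four nodes.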

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0915 11:24:13.113321    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m16.798577542s)
ha_test.go:413: expected profile "ha-748000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-748000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-748000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-748000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
E0915 11:24:29.773542    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 3 (25.957777834s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0915 11:24:47.623445    3966 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0915 11:24:47.623465    3966 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (102.76s)
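The assertion at ha_test.go:413 reads only two fields out of that large JSON blob: the profile's Name and its Status, which came back "Stopped" where the test wanted "Degraded" (the state expected while just one of the three control-plane nodes is down). A hedged sketch of the same check follows; the struct names are mine, and only the fields the assertion touches are modeled.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profile models only what the degraded-status check needs; the large
// Config object seen in the log is deliberately left out.
type profile struct {
	Name   string `json:"Name"`
	Status string `json:"Status"`
}

type profileList struct {
	Invalid []profile `json:"invalid"`
	Valid   []profile `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-748000" && p.Status != "Degraded" {
			fmt.Printf("expected %q to be Degraded, got %q\n", p.Name, p.Status)
		}
	}
}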

TestMultiControlPlane/serial/RestartSecondaryNode (208.73s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.106139958s)

-- stdout --
	* Starting "ha-748000-m02" control-plane node in "ha-748000" cluster
	* Restarting existing qemu2 VM for "ha-748000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-748000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:24:47.679178    3971 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:24:47.679492    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:24:47.679496    3971 out.go:358] Setting ErrFile to fd 2...
	I0915 11:24:47.679500    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:24:47.679646    3971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:24:47.679948    3971 mustload.go:65] Loading cluster: ha-748000
	I0915 11:24:47.680224    3971 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0915 11:24:47.680506    3971 host.go:58] "ha-748000-m02" host status: Stopped
	I0915 11:24:47.685002    3971 out.go:177] * Starting "ha-748000-m02" control-plane node in "ha-748000" cluster
	I0915 11:24:47.689042    3971 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:24:47.689057    3971 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:24:47.689064    3971 cache.go:56] Caching tarball of preloaded images
	I0915 11:24:47.689141    3971 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:24:47.689147    3971 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:24:47.689214    3971 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/ha-748000/config.json ...
	I0915 11:24:47.689940    3971 start.go:360] acquireMachinesLock for ha-748000-m02: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:24:47.690016    3971 start.go:364] duration metric: took 36.834µs to acquireMachinesLock for "ha-748000-m02"
	I0915 11:24:47.690027    3971 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:24:47.690030    3971 fix.go:54] fixHost starting: m02
	I0915 11:24:47.690158    3971 fix.go:112] recreateIfNeeded on ha-748000-m02: state=Stopped err=<nil>
	W0915 11:24:47.690165    3971 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:24:47.693964    3971 out.go:177] * Restarting existing qemu2 VM for "ha-748000-m02" ...
	I0915 11:24:47.697991    3971 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:24:47.698060    3971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:80:a2:77:2e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/disk.qcow2
	I0915 11:24:47.700977    3971 main.go:141] libmachine: STDOUT: 
	I0915 11:24:47.700999    3971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:24:47.701030    3971 fix.go:56] duration metric: took 10.997ms for fixHost
	I0915 11:24:47.701040    3971 start.go:83] releasing machines lock for "ha-748000-m02", held for 11.013459ms
	W0915 11:24:47.701047    3971 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:24:47.701077    3971 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:24:47.701082    3971 start.go:729] Will try again in 5 seconds ...
	I0915 11:24:52.703009    3971 start.go:360] acquireMachinesLock for ha-748000-m02: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:24:52.703121    3971 start.go:364] duration metric: took 89.75µs to acquireMachinesLock for "ha-748000-m02"
	I0915 11:24:52.703176    3971 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:24:52.703180    3971 fix.go:54] fixHost starting: m02
	I0915 11:24:52.703346    3971 fix.go:112] recreateIfNeeded on ha-748000-m02: state=Stopped err=<nil>
	W0915 11:24:52.703351    3971 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:24:52.707432    3971 out.go:177] * Restarting existing qemu2 VM for "ha-748000-m02" ...
	I0915 11:24:52.711283    3971 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:24:52.711318    3971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:80:a2:77:2e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/disk.qcow2
	I0915 11:24:52.713448    3971 main.go:141] libmachine: STDOUT: 
	I0915 11:24:52.713464    3971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:24:52.713483    3971 fix.go:56] duration metric: took 10.303084ms for fixHost
	I0915 11:24:52.713486    3971 start.go:83] releasing machines lock for "ha-748000-m02", held for 10.357542ms
	W0915 11:24:52.713527    3971 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:24:52.717329    3971 out.go:201] 
	W0915 11:24:52.721171    3971 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:24:52.721176    3971 out.go:270] * 
	* 
	W0915 11:24:52.722954    3971 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:24:52.727288    3971 out.go:201] 

** /stderr **
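The stderr above also shows how the VM gets its network: minikube does not exec qemu directly but wraps it in /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ..., and the -netdev socket,id=net0,fd=3 flag tells qemu to use an already-connected socket inherited as file descriptor 3. With the socket_vmnet daemon down, the wrapper fails at its very first step, connecting to /var/run/socket_vmnet, which is the "Connection refused" printed twice above before qemu ever runs. Here is a sketch of that descriptor hand-off in Go; the child command is a placeholder, not qemu, and this models only the fd-passing contract, not socket_vmnet's actual protocol.

package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Step 1: connect to the daemon's unix socket. A "connection refused"
	// here is exactly the failure mode in the log: the socket_vmnet
	// daemon is not running, so nothing is listening at this path.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()

	// Extract the underlying descriptor so a child process can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	// Step 2: exec the child with the socket as fd 3. ExtraFiles[0]
	// becomes descriptor 3 in the child (after stdin/stdout/stderr),
	// matching qemu's "-netdev socket,id=net0,fd=3". `ls` is a
	// placeholder child that just lists its open descriptors.
	cmd := exec.Command("ls", "-l", "/dev/fd/")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println(err)
	}
}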
ha_test.go:422: I0915 11:24:47.679178    3971 out.go:345] Setting OutFile to fd 1 ...
I0915 11:24:47.679492    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:24:47.679496    3971 out.go:358] Setting ErrFile to fd 2...
I0915 11:24:47.679500    3971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:24:47.679646    3971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:24:47.679948    3971 mustload.go:65] Loading cluster: ha-748000
I0915 11:24:47.680224    3971 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0915 11:24:47.680506    3971 host.go:58] "ha-748000-m02" host status: Stopped
I0915 11:24:47.685002    3971 out.go:177] * Starting "ha-748000-m02" control-plane node in "ha-748000" cluster
I0915 11:24:47.689042    3971 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0915 11:24:47.689057    3971 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0915 11:24:47.689064    3971 cache.go:56] Caching tarball of preloaded images
I0915 11:24:47.689141    3971 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0915 11:24:47.689147    3971 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0915 11:24:47.689214    3971 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/ha-748000/config.json ...
I0915 11:24:47.689940    3971 start.go:360] acquireMachinesLock for ha-748000-m02: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0915 11:24:47.690016    3971 start.go:364] duration metric: took 36.834µs to acquireMachinesLock for "ha-748000-m02"
I0915 11:24:47.690027    3971 start.go:96] Skipping create...Using existing machine configuration
I0915 11:24:47.690030    3971 fix.go:54] fixHost starting: m02
I0915 11:24:47.690158    3971 fix.go:112] recreateIfNeeded on ha-748000-m02: state=Stopped err=<nil>
W0915 11:24:47.690165    3971 fix.go:138] unexpected machine state, will restart: <nil>
I0915 11:24:47.693964    3971 out.go:177] * Restarting existing qemu2 VM for "ha-748000-m02" ...
I0915 11:24:47.697991    3971 qemu.go:418] Using hvf for hardware acceleration
I0915 11:24:47.698060    3971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:80:a2:77:2e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/disk.qcow2
I0915 11:24:47.700977    3971 main.go:141] libmachine: STDOUT: 
I0915 11:24:47.700999    3971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0915 11:24:47.701030    3971 fix.go:56] duration metric: took 10.997ms for fixHost
I0915 11:24:47.701040    3971 start.go:83] releasing machines lock for "ha-748000-m02", held for 11.013459ms
W0915 11:24:47.701047    3971 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0915 11:24:47.701077    3971 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0915 11:24:47.701082    3971 start.go:729] Will try again in 5 seconds ...
I0915 11:24:52.703009    3971 start.go:360] acquireMachinesLock for ha-748000-m02: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0915 11:24:52.703121    3971 start.go:364] duration metric: took 89.75µs to acquireMachinesLock for "ha-748000-m02"
I0915 11:24:52.703176    3971 start.go:96] Skipping create...Using existing machine configuration
I0915 11:24:52.703180    3971 fix.go:54] fixHost starting: m02
I0915 11:24:52.703346    3971 fix.go:112] recreateIfNeeded on ha-748000-m02: state=Stopped err=<nil>
W0915 11:24:52.703351    3971 fix.go:138] unexpected machine state, will restart: <nil>
I0915 11:24:52.707432    3971 out.go:177] * Restarting existing qemu2 VM for "ha-748000-m02" ...
I0915 11:24:52.711283    3971 qemu.go:418] Using hvf for hardware acceleration
I0915 11:24:52.711318    3971 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:80:a2:77:2e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m02/disk.qcow2
I0915 11:24:52.713448    3971 main.go:141] libmachine: STDOUT: 
I0915 11:24:52.713464    3971 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0915 11:24:52.713483    3971 fix.go:56] duration metric: took 10.303084ms for fixHost
I0915 11:24:52.713486    3971 start.go:83] releasing machines lock for "ha-748000-m02", held for 10.357542ms
W0915 11:24:52.713527    3971 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0915 11:24:52.717329    3971 out.go:201] 
W0915 11:24:52.721171    3971 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0915 11:24:52.721176    3971 out.go:270] * 
* 
W0915 11:24:52.722954    3971 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0915 11:24:52.727288    3971 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-748000 node start m02 -v=7 --alsologtostderr": exit status 80
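Exit status 80 is how the harness separates "minikube ran and bailed out" (the GUEST_NODE_PROVISION exit above) from "the binary never ran at all"; as far as I can tell, 80 belongs to minikube's guest-error family of reserved exit codes. The test-side pattern for recovering such a code in Go looks roughly like the sketch below; the command line is taken from the log, the surrounding assertion text is mine.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "ha-748000", "node", "start", "m02", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("node start succeeded")
	case errors.As(err, &exitErr):
		// The process ran and exited non-zero: the "exit status 80" case.
		fmt.Printf("node start failed: exit status %d\n%s", exitErr.ExitCode(), out)
	default:
		// The process never ran (bad path, permissions, ...).
		fmt.Println("could not run minikube:", err)
	}
}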
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
E0915 11:24:57.500000    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr: exit status 7 (2m57.6659075s)

-- stdout --
	ha-748000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-748000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-748000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0915 11:24:52.763586    3975 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:24:52.763755    3975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:24:52.763759    3975 out.go:358] Setting ErrFile to fd 2...
	I0915 11:24:52.763762    3975 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:24:52.763894    3975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:24:52.764017    3975 out.go:352] Setting JSON to false
	I0915 11:24:52.764028    3975 mustload.go:65] Loading cluster: ha-748000
	I0915 11:24:52.764073    3975 notify.go:220] Checking for updates...
	I0915 11:24:52.764267    3975 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:24:52.764274    3975 status.go:255] checking status of ha-748000 ...
	I0915 11:24:52.765029    3975 status.go:330] ha-748000 host status = "Running" (err=<nil>)
	I0915 11:24:52.765037    3975 host.go:66] Checking if "ha-748000" exists ...
	I0915 11:24:52.765146    3975 host.go:66] Checking if "ha-748000" exists ...
	I0915 11:24:52.765263    3975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:24:52.765270    3975 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/id_rsa Username:docker}
	W0915 11:24:52.765451    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0915 11:24:52.765466    3975 retry.go:31] will retry after 144.587126ms: dial tcp 192.168.105.5:22: connect: host is down
	W0915 11:24:52.912181    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0915 11:24:52.912203    3975 retry.go:31] will retry after 399.623304ms: dial tcp 192.168.105.5:22: connect: host is down
	W0915 11:24:53.313974    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0915 11:24:53.313995    3975 retry.go:31] will retry after 792.896474ms: dial tcp 192.168.105.5:22: connect: host is down
	W0915 11:24:54.109032    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0915 11:24:54.109088    3975 retry.go:31] will retry after 362.831073ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0915 11:24:54.473796    3975 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/id_rsa Username:docker}
	W0915 11:25:20.391557    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0915 11:25:20.391601    3975 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0915 11:25:20.391608    3975 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0915 11:25:20.391611    3975 status.go:257] ha-748000 status: &{Name:ha-748000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:25:20.391621    3975 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0915 11:25:20.391624    3975 status.go:255] checking status of ha-748000-m02 ...
	I0915 11:25:20.391836    3975 status.go:330] ha-748000-m02 host status = "Stopped" (err=<nil>)
	I0915 11:25:20.391842    3975 status.go:343] host is not running, skipping remaining checks
	I0915 11:25:20.391844    3975 status.go:257] ha-748000-m02 status: &{Name:ha-748000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:25:20.391849    3975 status.go:255] checking status of ha-748000-m03 ...
	I0915 11:25:20.392437    3975 status.go:330] ha-748000-m03 host status = "Running" (err=<nil>)
	I0915 11:25:20.392444    3975 host.go:66] Checking if "ha-748000-m03" exists ...
	I0915 11:25:20.392563    3975 host.go:66] Checking if "ha-748000-m03" exists ...
	I0915 11:25:20.392692    3975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:25:20.392699    3975 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m03/id_rsa Username:docker}
	W0915 11:26:35.393409    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0915 11:26:35.393452    3975 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0915 11:26:35.393459    3975 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0915 11:26:35.393463    3975 status.go:257] ha-748000-m03 status: &{Name:ha-748000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:26:35.393471    3975 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0915 11:26:35.393475    3975 status.go:255] checking status of ha-748000-m04 ...
	I0915 11:26:35.394141    3975 status.go:330] ha-748000-m04 host status = "Running" (err=<nil>)
	I0915 11:26:35.394150    3975 host.go:66] Checking if "ha-748000-m04" exists ...
	I0915 11:26:35.394239    3975 host.go:66] Checking if "ha-748000-m04" exists ...
	I0915 11:26:35.394359    3975 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 11:26:35.394368    3975 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000-m04/id_rsa Username:docker}
	W0915 11:27:50.393116    3975 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0915 11:27:50.393164    3975 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0915 11:27:50.393172    3975 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0915 11:27:50.393175    3975 status.go:257] ha-748000-m04 status: &{Name:ha-748000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0915 11:27:50.393184    3975 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 3 (25.955023708s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0915 11:28:16.347685    4002 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0915 11:28:16.347699    4002 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.73s)
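The retry.go lines in the status output above (will retry after 144 ms, 399 ms, 792 ms, then a plain redial) show the client-side pattern: redial with randomized, roughly doubling waits while the host may still be booting, then fall through to a final attempt that waits out the long OS connect timeout. A self-contained sketch of that backoff loop follows; the base delay, cap, and overall deadline are my choices, not minikube's tuning.

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry redials addr with jittered, roughly doubling waits
// until an overall deadline expires, mirroring the retry.go pattern
// visible in the log above.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	start := time.Now()
	delay := 100 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		// Jitter the wait; the log shows 144ms, 399ms, 792ms, ...
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if delay < 2*time.Second {
			delay *= 2 // cap the growth (assumption)
		}
	}
}

func main() {
	if _, err := dialWithRetry("192.168.105.5:22", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}

The jitter keeps the four per-node checks from redialing in lockstep when a whole subnet is unreachable.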

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-748000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-748000 -v=7 --alsologtostderr
E0915 11:30:36.194753    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-748000 -v=7 --alsologtostderr: (3m49.018110209s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226568s)

-- stdout --
	* [ha-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-748000" primary control-plane node in "ha-748000" cluster
	* Restarting existing qemu2 VM for "ha-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:33:23.516202    4411 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:33:23.516425    4411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:23.516429    4411 out.go:358] Setting ErrFile to fd 2...
	I0915 11:33:23.516433    4411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:23.516594    4411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:33:23.517794    4411 out.go:352] Setting JSON to false
	I0915 11:33:23.536962    4411 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3766,"bootTime":1726421437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:33:23.537043    4411 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:33:23.541636    4411 out.go:177] * [ha-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:33:23.549507    4411 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:33:23.549541    4411 notify.go:220] Checking for updates...
	I0915 11:33:23.557701    4411 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:33:23.560660    4411 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:33:23.563676    4411 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:33:23.566714    4411 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:33:23.569628    4411 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:33:23.572995    4411 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:33:23.573051    4411 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:33:23.577702    4411 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:33:23.584660    4411 start.go:297] selected driver: qemu2
	I0915 11:33:23.584665    4411 start.go:901] validating driver "qemu2" against &{Name:ha-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-748000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:33:23.584748    4411 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:33:23.587620    4411 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:33:23.587647    4411 cni.go:84] Creating CNI manager for ""
	I0915 11:33:23.587676    4411 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 11:33:23.587725    4411 start.go:340] cluster config:
	{Name:ha-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:33:23.592041    4411 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:33:23.600657    4411 out.go:177] * Starting "ha-748000" primary control-plane node in "ha-748000" cluster
	I0915 11:33:23.604701    4411 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:33:23.604719    4411 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:33:23.604729    4411 cache.go:56] Caching tarball of preloaded images
	I0915 11:33:23.604786    4411 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:33:23.604791    4411 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:33:23.604856    4411 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/ha-748000/config.json ...
	I0915 11:33:23.605286    4411 start.go:360] acquireMachinesLock for ha-748000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:33:23.605321    4411 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "ha-748000"
	I0915 11:33:23.605329    4411 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:33:23.605333    4411 fix.go:54] fixHost starting: 
	I0915 11:33:23.605445    4411 fix.go:112] recreateIfNeeded on ha-748000: state=Stopped err=<nil>
	W0915 11:33:23.605453    4411 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:33:23.609727    4411 out.go:177] * Restarting existing qemu2 VM for "ha-748000" ...
	I0915 11:33:23.617581    4411 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:33:23.617631    4411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:5e:8b:a9:4e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/disk.qcow2
	I0915 11:33:23.619547    4411 main.go:141] libmachine: STDOUT: 
	I0915 11:33:23.619565    4411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:33:23.619597    4411 fix.go:56] duration metric: took 14.252125ms for fixHost
	I0915 11:33:23.619601    4411 start.go:83] releasing machines lock for "ha-748000", held for 14.266334ms
	W0915 11:33:23.619607    4411 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:33:23.619639    4411 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:33:23.619644    4411 start.go:729] Will try again in 5 seconds ...
	I0915 11:33:28.624786    4411 start.go:360] acquireMachinesLock for ha-748000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:33:28.625230    4411 start.go:364] duration metric: took 337.959µs to acquireMachinesLock for "ha-748000"
	I0915 11:33:28.625385    4411 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:33:28.625406    4411 fix.go:54] fixHost starting: 
	I0915 11:33:28.626156    4411 fix.go:112] recreateIfNeeded on ha-748000: state=Stopped err=<nil>
	W0915 11:33:28.626184    4411 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:33:28.633711    4411 out.go:177] * Restarting existing qemu2 VM for "ha-748000" ...
	I0915 11:33:28.636792    4411 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:33:28.637035    4411 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:5e:8b:a9:4e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/disk.qcow2
	I0915 11:33:28.646358    4411 main.go:141] libmachine: STDOUT: 
	I0915 11:33:28.646426    4411 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:33:28.646533    4411 fix.go:56] duration metric: took 21.116208ms for fixHost
	I0915 11:33:28.646554    4411 start.go:83] releasing machines lock for "ha-748000", held for 21.287542ms
	W0915 11:33:28.646762    4411 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:33:28.655687    4411 out.go:201] 
	W0915 11:33:28.659831    4411 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:33:28.659868    4411 out.go:270] * 
	* 
	W0915 11:33:28.662507    4411 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:33:28.669696    4411 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-748000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-748000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (33.650375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)
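Every restart attempt above fails at the same point: socket_vmnet_client cannot reach the vmnet daemon's Unix socket, so the qemu-system-aarch64 command is never actually launched. One plausible triage on the CI host, assuming socket_vmnet is managed through Homebrew (the /opt/socket_vmnet client path in the log suggests this; the service name below is an assumption):

	# Check whether the socket_vmnet daemon is up. The socket path is taken
	# from the log; the Homebrew service management is an assumption.
	ls -l /var/run/socket_vmnet               # socket should exist while the daemon runs
	sudo brew services info socket_vmnet      # launchd service state
	sudo brew services restart socket_vmnet   # then re-run the failing test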

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.75075ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-748000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-748000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:33:28.815818    4426 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:33:28.816106    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:28.816109    4426 out.go:358] Setting ErrFile to fd 2...
	I0915 11:33:28.816111    4426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:28.816253    4426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:33:28.816493    4426 mustload.go:65] Loading cluster: ha-748000
	I0915 11:33:28.816750    4426 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0915 11:33:28.817069    4426 out.go:270] ! The control-plane node ha-748000 host is not running (will try others): state=Stopped
	! The control-plane node ha-748000 host is not running (will try others): state=Stopped
	W0915 11:33:28.817186    4426 out.go:270] ! The control-plane node ha-748000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-748000-m02 host is not running (will try others): state=Stopped
	I0915 11:33:28.821504    4426 out.go:177] * The control-plane node ha-748000-m03 host is not running: state=Stopped
	I0915 11:33:28.824436    4426 out.go:177]   To start a cluster, run: "minikube start -p ha-748000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-748000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr: exit status 7 (30.484917ms)

                                                
                                                
-- stdout --
	ha-748000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:33:28.856896    4428 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:33:28.857025    4428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:28.857028    4428 out.go:358] Setting ErrFile to fd 2...
	I0915 11:33:28.857031    4428 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:33:28.857146    4428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:33:28.857263    4428 out.go:352] Setting JSON to false
	I0915 11:33:28.857273    4428 mustload.go:65] Loading cluster: ha-748000
	I0915 11:33:28.857326    4428 notify.go:220] Checking for updates...
	I0915 11:33:28.857501    4428 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:33:28.857510    4428 status.go:255] checking status of ha-748000 ...
	I0915 11:33:28.857737    4428 status.go:330] ha-748000 host status = "Stopped" (err=<nil>)
	I0915 11:33:28.857740    4428 status.go:343] host is not running, skipping remaining checks
	I0915 11:33:28.857742    4428 status.go:257] ha-748000 status: &{Name:ha-748000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:33:28.857753    4428 status.go:255] checking status of ha-748000-m02 ...
	I0915 11:33:28.857840    4428 status.go:330] ha-748000-m02 host status = "Stopped" (err=<nil>)
	I0915 11:33:28.857843    4428 status.go:343] host is not running, skipping remaining checks
	I0915 11:33:28.857844    4428 status.go:257] ha-748000-m02 status: &{Name:ha-748000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:33:28.857848    4428 status.go:255] checking status of ha-748000-m03 ...
	I0915 11:33:28.857933    4428 status.go:330] ha-748000-m03 host status = "Stopped" (err=<nil>)
	I0915 11:33:28.857935    4428 status.go:343] host is not running, skipping remaining checks
	I0915 11:33:28.857936    4428 status.go:257] ha-748000-m03 status: &{Name:ha-748000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:33:28.857940    4428 status.go:255] checking status of ha-748000-m04 ...
	I0915 11:33:28.858031    4428 status.go:330] ha-748000-m04 host status = "Stopped" (err=<nil>)
	I0915 11:33:28.858034    4428 status.go:343] host is not running, skipping remaining checks
	I0915 11:33:28.858036    4428 status.go:257] ha-748000-m04 status: &{Name:ha-748000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (30.66425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-748000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-748000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-748000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-748000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (30.512083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
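The Degraded assertion parses the output of "minikube profile list --output json" and inspects the profile's Status field; with every node stopped the profile reports "Stopped" instead of the expected "Degraded". A minimal sketch of pulling that field out of the JSON shown above (assumes jq is available on the host):

	# Print each valid profile with its status; .valid[].Name and .Status
	# match the JSON structure dumped in the assertion message above.
	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | "\(.Name)\t\(.Status)"'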

                                                
                                    
TestMultiControlPlane/serial/StopCluster (202.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 stop -v=7 --alsologtostderr
E0915 11:34:13.143465    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:34:29.804004    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:35:52.893768    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-748000 stop -v=7 --alsologtostderr: (3m21.970709458s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr: exit status 7 (65.991791ms)

                                                
                                                
-- stdout --
	ha-748000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-748000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:36:51.009739    4469 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:36:51.009954    4469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:51.009958    4469 out.go:358] Setting ErrFile to fd 2...
	I0915 11:36:51.009961    4469 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:51.010117    4469 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:36:51.010279    4469 out.go:352] Setting JSON to false
	I0915 11:36:51.010290    4469 mustload.go:65] Loading cluster: ha-748000
	I0915 11:36:51.010319    4469 notify.go:220] Checking for updates...
	I0915 11:36:51.010598    4469 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:36:51.010609    4469 status.go:255] checking status of ha-748000 ...
	I0915 11:36:51.010913    4469 status.go:330] ha-748000 host status = "Stopped" (err=<nil>)
	I0915 11:36:51.010918    4469 status.go:343] host is not running, skipping remaining checks
	I0915 11:36:51.010921    4469 status.go:257] ha-748000 status: &{Name:ha-748000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:36:51.010934    4469 status.go:255] checking status of ha-748000-m02 ...
	I0915 11:36:51.011057    4469 status.go:330] ha-748000-m02 host status = "Stopped" (err=<nil>)
	I0915 11:36:51.011062    4469 status.go:343] host is not running, skipping remaining checks
	I0915 11:36:51.011065    4469 status.go:257] ha-748000-m02 status: &{Name:ha-748000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:36:51.011071    4469 status.go:255] checking status of ha-748000-m03 ...
	I0915 11:36:51.011210    4469 status.go:330] ha-748000-m03 host status = "Stopped" (err=<nil>)
	I0915 11:36:51.011214    4469 status.go:343] host is not running, skipping remaining checks
	I0915 11:36:51.011216    4469 status.go:257] ha-748000-m03 status: &{Name:ha-748000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 11:36:51.011221    4469 status.go:255] checking status of ha-748000-m04 ...
	I0915 11:36:51.011350    4469 status.go:330] ha-748000-m04 host status = "Stopped" (err=<nil>)
	I0915 11:36:51.011355    4469 status.go:343] host is not running, skipping remaining checks
	I0915 11:36:51.011357    4469 status.go:257] ha-748000-m04 status: &{Name:ha-748000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr": ha-748000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-748000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (32.892ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.07s)
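The post-mortem helpers read a single field through a Go template, and exit status 7 here simply reflects the stopped host (the helper itself notes it "may be ok"). The other fields dumped in the stderr above (Name, Host, Kubelet, APIServer, Kubeconfig) can be queried the same way, for example:

	# Per-node status line via a Go template; the field names are taken from
	# the status struct printed in the log above.
	out/minikube-darwin-arm64 status -p ha-748000 --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}}'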

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.182248s)

                                                
                                                
-- stdout --
	* [ha-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-748000" primary control-plane node in "ha-748000" cluster
	* Restarting existing qemu2 VM for "ha-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-748000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:36:51.074018    4473 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:36:51.074156    4473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:51.074160    4473 out.go:358] Setting ErrFile to fd 2...
	I0915 11:36:51.074163    4473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:51.074295    4473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:36:51.075277    4473 out.go:352] Setting JSON to false
	I0915 11:36:51.091509    4473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3974,"bootTime":1726421437,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:36:51.091577    4473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:36:51.096390    4473 out.go:177] * [ha-748000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:36:51.103367    4473 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:36:51.103412    4473 notify.go:220] Checking for updates...
	I0915 11:36:51.110331    4473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:36:51.113170    4473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:36:51.116289    4473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:36:51.119325    4473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:36:51.122377    4473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:36:51.125612    4473 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:36:51.125889    4473 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:36:51.130324    4473 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:36:51.137479    4473 start.go:297] selected driver: qemu2
	I0915 11:36:51.137486    4473 start.go:901] validating driver "qemu2" against &{Name:ha-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-748000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:36:51.137569    4473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:36:51.139854    4473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:36:51.139875    4473 cni.go:84] Creating CNI manager for ""
	I0915 11:36:51.139895    4473 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0915 11:36:51.139970    4473 start.go:340] cluster config:
	{Name:ha-748000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:36:51.143626    4473 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:36:51.152310    4473 out.go:177] * Starting "ha-748000" primary control-plane node in "ha-748000" cluster
	I0915 11:36:51.156304    4473 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:36:51.156319    4473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:36:51.156330    4473 cache.go:56] Caching tarball of preloaded images
	I0915 11:36:51.156394    4473 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:36:51.156400    4473 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:36:51.156477    4473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/ha-748000/config.json ...
	I0915 11:36:51.156908    4473 start.go:360] acquireMachinesLock for ha-748000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:36:51.156943    4473 start.go:364] duration metric: took 29µs to acquireMachinesLock for "ha-748000"
	I0915 11:36:51.156952    4473 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:36:51.156956    4473 fix.go:54] fixHost starting: 
	I0915 11:36:51.157069    4473 fix.go:112] recreateIfNeeded on ha-748000: state=Stopped err=<nil>
	W0915 11:36:51.157077    4473 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:36:51.161240    4473 out.go:177] * Restarting existing qemu2 VM for "ha-748000" ...
	I0915 11:36:51.169316    4473 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:36:51.169350    4473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:5e:8b:a9:4e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/disk.qcow2
	I0915 11:36:51.171329    4473 main.go:141] libmachine: STDOUT: 
	I0915 11:36:51.171345    4473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:36:51.171377    4473 fix.go:56] duration metric: took 14.419292ms for fixHost
	I0915 11:36:51.171382    4473 start.go:83] releasing machines lock for "ha-748000", held for 14.434583ms
	W0915 11:36:51.171387    4473 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:36:51.171425    4473 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:36:51.171429    4473 start.go:729] Will try again in 5 seconds ...
	I0915 11:36:56.173583    4473 start.go:360] acquireMachinesLock for ha-748000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:36:56.174009    4473 start.go:364] duration metric: took 338.541µs to acquireMachinesLock for "ha-748000"
	I0915 11:36:56.174156    4473 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:36:56.174175    4473 fix.go:54] fixHost starting: 
	I0915 11:36:56.174923    4473 fix.go:112] recreateIfNeeded on ha-748000: state=Stopped err=<nil>
	W0915 11:36:56.174950    4473 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:36:56.179428    4473 out.go:177] * Restarting existing qemu2 VM for "ha-748000" ...
	I0915 11:36:56.182377    4473 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:36:56.182585    4473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:5e:8b:a9:4e:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/ha-748000/disk.qcow2
	I0915 11:36:56.191729    4473 main.go:141] libmachine: STDOUT: 
	I0915 11:36:56.191808    4473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:36:56.191878    4473 fix.go:56] duration metric: took 17.70525ms for fixHost
	I0915 11:36:56.191904    4473 start.go:83] releasing machines lock for "ha-748000", held for 17.874125ms
	W0915 11:36:56.192112    4473 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-748000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:36:56.199363    4473 out.go:201] 
	W0915 11:36:56.203418    4473 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:36:56.203450    4473 out.go:270] * 
	* 
	W0915 11:36:56.206280    4473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:36:56.212327    4473 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (69.738416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
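The error output itself suggests the recovery path. A sketch of it follows, with the caveat that recreating the VM cannot help while the socket_vmnet daemon is still unreachable:

	# Recovery commands quoted from the error output above; they will keep
	# failing until /var/run/socket_vmnet accepts connections again.
	out/minikube-darwin-arm64 delete -p ha-748000
	out/minikube-darwin-arm64 start -p ha-748000 --wait=true -v=7 --alsologtostderr --driver=qemu2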

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-748000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-748000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-748000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-748000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (30.313875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-748000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-748000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.259375ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-748000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-748000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:36:56.409498    4488 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:36:56.409644    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:56.409648    4488 out.go:358] Setting ErrFile to fd 2...
	I0915 11:36:56.409650    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:36:56.409760    4488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:36:56.409976    4488 mustload.go:65] Loading cluster: ha-748000
	I0915 11:36:56.410216    4488 config.go:182] Loaded profile config "ha-748000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0915 11:36:56.410521    4488 out.go:270] ! The control-plane node ha-748000 host is not running (will try others): state=Stopped
	! The control-plane node ha-748000 host is not running (will try others): state=Stopped
	W0915 11:36:56.410620    4488 out.go:270] ! The control-plane node ha-748000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-748000-m02 host is not running (will try others): state=Stopped
	I0915 11:36:56.415250    4488 out.go:177] * The control-plane node ha-748000-m03 host is not running: state=Stopped
	I0915 11:36:56.419188    4488 out.go:177]   To start a cluster, run: "minikube start -p ha-748000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-748000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-748000 -n ha-748000: exit status 7 (29.967167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-748000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestImageBuild/serial/Setup (9.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-042000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-042000 --driver=qemu2 : exit status 80 (9.896865417s)

                                                
                                                
-- stdout --
	* [image-042000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-042000" primary control-plane node in "image-042000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-042000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-042000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-042000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-042000 -n image-042000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-042000 -n image-042000: exit status 7 (69.885833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-042000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.97s)
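
Note: every qemu2 start failure in this run shares the root cause visible in the stdout above. QEMU is launched through socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, which typically means the socket_vmnet daemon is not running (or not listening) on the CI host. A minimal Go sketch of that probe, assuming only the socket path taken from the logs:

	// Minimal sketch (not part of the test suite): dial the socket_vmnet
	// unix socket the same way the qemu2 driver effectively does.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the dial fails with "connection refused",
			// matching the error in every start failure above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}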

                                                
                                    
TestJSONOutput/start/Command (9.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-309000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-309000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.9364425s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f671046f-988b-420c-99eb-ad5f914bcd42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-309000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8cabf11-4c44-46c4-9604-b3dcdb5a696b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"5b1b3fb0-750b-42b2-acb8-3bd38f0d9b4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig"}}
	{"specversion":"1.0","id":"e9822e47-59e0-405c-85a4-97155041dece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"5a66da65-42d8-4d1f-a2f8-5b0a8eb86866","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ee24f419-6b9c-46e0-999c-94573b59fa2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube"}}
	{"specversion":"1.0","id":"1d4219fe-3d8d-4240-975c-bbf7a110ea45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7d7c2da2-9cfd-437d-846f-fa4029942555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cc90388-6bff-4c33-bae2-29630569d883","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"96444d98-81a2-43a0-b1c0-32827f5ebdfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-309000\" primary control-plane node in \"json-output-309000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfe0bda7-95d8-471c-aaf5-99b827414d39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"94ff896b-7a76-45ab-9f87-cf0ab13246ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-309000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"490f2f49-4719-45cd-ac7f-c5086bc1405d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3f9cf056-8f3f-4c7d-ac1c-9100282ff223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cf69360c-8c43-42f2-869c-9d06dc957273","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-309000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"fe6a090b-ca62-4d3e-810e-2fcdb49479cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"b6790a39-6198-4db8-9afd-b7b6633ffb9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-309000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.94s)
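
Note: the secondary failure here (json_output_test.go:70) follows directly from the primary one. The driver's raw "OUTPUT:"/"ERROR:" lines are interleaved with the CloudEvents JSON stream on stdout, and a line-by-line JSON decode rejects the first line that does not begin with a JSON value, hence "invalid character 'O' looking for beginning of value". A minimal sketch of that decode step (illustrative, not the test's actual code):

	// Minimal sketch: why a stray driver line breaks the JSON event check.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
			`OUTPUT: `, // raw qemu driver output mixed into stdout
		}
		for _, l := range lines {
			var ev map[string]any
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Printf("line %q: %v\n", l, err)
				continue
			}
			fmt.Printf("line %q: ok\n", l)
		}
	}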

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-309000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-309000 --output=json --user=testUser: exit status 83 (79.311042ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d369e055-26ff-4ad7-93b7-b7aba8f9012e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-309000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3f272e3d-ee8a-44c9-9202-0feea2413474","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-309000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-309000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-309000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-309000 --output=json --user=testUser: exit status 83 (45.174ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-309000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-309000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-309000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-309000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
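
Note: this is the same decode failure as in TestJSONOutput/start/Command, triggered differently. With the cluster stopped, unpause falls back to human-readable output, and the leading "*" of that output is likewise not a valid start of a JSON value.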

                                                
                                    
TestMinikubeProfile (10.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-197000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-197000 --driver=qemu2 : exit status 80 (9.843462542s)

                                                
                                                
-- stdout --
	* [first-197000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-197000" primary control-plane node in "first-197000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-197000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-197000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-197000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-15 11:37:30.609186 -0700 PDT m=+2516.455215626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-199000 -n second-199000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-199000 -n second-199000: exit status 85 (80.704375ms)

                                                
                                                
-- stdout --
	* Profile "second-199000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-199000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-199000" host is not running, skipping log retrieval (state="* Profile \"second-199000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-199000\"")
helpers_test.go:175: Cleaning up "second-199000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-199000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-15 11:37:30.795001 -0700 PDT m=+2516.641032126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-197000 -n first-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-197000 -n first-197000: exit status 7 (30.313ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-197000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-197000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-197000
--- FAIL: TestMinikubeProfile (10.14s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.95s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.879237375s)

                                                
                                                
-- stdout --
	* [mount-start-1-034000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-034000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-034000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-034000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-034000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-034000 -n mount-start-1-034000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-034000 -n mount-start-1-034000: exit status 7 (69.254625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-034000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.95s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-715000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-715000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.931113s)

                                                
                                                
-- stdout --
	* [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-715000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:37:41.067252    4632 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:37:41.067377    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:37:41.067380    4632 out.go:358] Setting ErrFile to fd 2...
	I0915 11:37:41.067383    4632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:37:41.067517    4632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:37:41.068653    4632 out.go:352] Setting JSON to false
	I0915 11:37:41.084710    4632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4024,"bootTime":1726421437,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:37:41.084783    4632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:37:41.090807    4632 out.go:177] * [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:37:41.098749    4632 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:37:41.098823    4632 notify.go:220] Checking for updates...
	I0915 11:37:41.106645    4632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:37:41.109704    4632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:37:41.112696    4632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:37:41.115748    4632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:37:41.118705    4632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:37:41.121825    4632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:37:41.125687    4632 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:37:41.132717    4632 start.go:297] selected driver: qemu2
	I0915 11:37:41.132724    4632 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:37:41.132730    4632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:37:41.135158    4632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:37:41.138686    4632 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:37:41.141740    4632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:37:41.141757    4632 cni.go:84] Creating CNI manager for ""
	I0915 11:37:41.141778    4632 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0915 11:37:41.141781    4632 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 11:37:41.141811    4632 start.go:340] cluster config:
	{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:37:41.145597    4632 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:37:41.151651    4632 out.go:177] * Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	I0915 11:37:41.155726    4632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:37:41.155746    4632 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:37:41.155763    4632 cache.go:56] Caching tarball of preloaded images
	I0915 11:37:41.155835    4632 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:37:41.155842    4632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:37:41.156069    4632 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/multinode-715000/config.json ...
	I0915 11:37:41.156082    4632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/multinode-715000/config.json: {Name:mk6c4470c7fd9ee0196975f48f83d90b85936c58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:37:41.156548    4632 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:37:41.156588    4632 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "multinode-715000"
	I0915 11:37:41.156601    4632 start.go:93] Provisioning new machine with config: &{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:37:41.156633    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:37:41.165667    4632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:37:41.184343    4632 start.go:159] libmachine.API.Create for "multinode-715000" (driver="qemu2")
	I0915 11:37:41.184378    4632 client.go:168] LocalClient.Create starting
	I0915 11:37:41.184448    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:37:41.184484    4632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:37:41.184494    4632 main.go:141] libmachine: Parsing certificate...
	I0915 11:37:41.184529    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:37:41.184554    4632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:37:41.184562    4632 main.go:141] libmachine: Parsing certificate...
	I0915 11:37:41.184915    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:37:41.342716    4632 main.go:141] libmachine: Creating SSH key...
	I0915 11:37:41.491924    4632 main.go:141] libmachine: Creating Disk image...
	I0915 11:37:41.491931    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:37:41.492111    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:41.501857    4632 main.go:141] libmachine: STDOUT: 
	I0915 11:37:41.501878    4632 main.go:141] libmachine: STDERR: 
	I0915 11:37:41.501945    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2 +20000M
	I0915 11:37:41.509893    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:37:41.509920    4632 main.go:141] libmachine: STDERR: 
	I0915 11:37:41.509934    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:41.509939    4632 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:37:41.509953    4632 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:37:41.509982    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f1:6d:54:10:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:41.511694    4632 main.go:141] libmachine: STDOUT: 
	I0915 11:37:41.511706    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:37:41.511725    4632 client.go:171] duration metric: took 327.34575ms to LocalClient.Create
	I0915 11:37:43.513913    4632 start.go:128] duration metric: took 2.357274833s to createHost
	I0915 11:37:43.513987    4632 start.go:83] releasing machines lock for "multinode-715000", held for 2.357417958s
	W0915 11:37:43.514064    4632 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:37:43.532411    4632 out.go:177] * Deleting "multinode-715000" in qemu2 ...
	W0915 11:37:43.572924    4632 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:37:43.572952    4632 start.go:729] Will try again in 5 seconds ...
	I0915 11:37:48.575122    4632 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:37:48.575561    4632 start.go:364] duration metric: took 354.291µs to acquireMachinesLock for "multinode-715000"
	I0915 11:37:48.575694    4632 start.go:93] Provisioning new machine with config: &{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:37:48.576095    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:37:48.596809    4632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:37:48.647566    4632 start.go:159] libmachine.API.Create for "multinode-715000" (driver="qemu2")
	I0915 11:37:48.647626    4632 client.go:168] LocalClient.Create starting
	I0915 11:37:48.647743    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:37:48.647812    4632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:37:48.647832    4632 main.go:141] libmachine: Parsing certificate...
	I0915 11:37:48.647889    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:37:48.647936    4632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:37:48.647954    4632 main.go:141] libmachine: Parsing certificate...
	I0915 11:37:48.648649    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:37:48.817970    4632 main.go:141] libmachine: Creating SSH key...
	I0915 11:37:48.899481    4632 main.go:141] libmachine: Creating Disk image...
	I0915 11:37:48.899487    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:37:48.899653    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:48.908942    4632 main.go:141] libmachine: STDOUT: 
	I0915 11:37:48.908956    4632 main.go:141] libmachine: STDERR: 
	I0915 11:37:48.909008    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2 +20000M
	I0915 11:37:48.917106    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:37:48.917122    4632 main.go:141] libmachine: STDERR: 
	I0915 11:37:48.917133    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:48.917141    4632 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:37:48.917163    4632 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:37:48.917196    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0b:d1:ae:ed:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:37:48.918814    4632 main.go:141] libmachine: STDOUT: 
	I0915 11:37:48.918832    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:37:48.918844    4632 client.go:171] duration metric: took 271.216042ms to LocalClient.Create
	I0915 11:37:50.921011    4632 start.go:128] duration metric: took 2.344902458s to createHost
	I0915 11:37:50.921074    4632 start.go:83] releasing machines lock for "multinode-715000", held for 2.345510625s
	W0915 11:37:50.921472    4632 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:37:50.930999    4632 out.go:201] 
	W0915 11:37:50.943153    4632 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:37:50.943213    4632 out.go:270] * 
	* 
	W0915 11:37:50.945933    4632 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:37:50.956006    4632 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-715000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (69.229875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.00s)
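
Note: the verbose trace above also shows the start path's retry policy end to end: LocalClient.Create fails, start.go waits five seconds (start.go:729), retries once, then exits with the GUEST_PROVISION error class (exit status 80). A sketch of that control flow, under the assumption that createHost stands in for libmachine's create path (names hypothetical, not minikube's actual code):

	// Illustrative only: the retry shape visible in the trace above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stands in for libmachine's create, which fails while the
		// socket_vmnet socket refuses connections.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}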

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (110.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.587917ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-715000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- rollout status deployment/busybox: exit status 1 (59.993375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.228166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.950292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.268625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.131709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.397625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.707625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.920542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-715000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.376875ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.621167ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.976292ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0915 11:39:13.140046    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:39:29.800492    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.125917ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
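The wall of identical Run / Non-zero exit pairs above is a single poll loop: the test keeps re-issuing the same jsonpath query until pod IPs appear or its retry budget runs out. kubectl's `error: no server found for cluster "multinode-715000"` means the profile's kubeconfig entry has no live API server behind it, because the VM never came up. A minimal sketch of such a loop, assuming a hypothetical helper and timeout (the real constants live in multinode_test.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podIPs shells out the same way the log above does; the binary path and
    // profile name are taken from the log, the helper itself is illustrative.
    func podIPs(profile string) (string, error) {
    	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
    		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
    	return string(out), err
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // budget is a guess, not the test's constant
    	for time.Now().Before(deadline) {
    		if ips, err := podIPs("multinode-715000"); err == nil && strings.TrimSpace(ips) != "" {
    			fmt.Println("pod IPs:", ips)
    			return
    		}
    		time.Sleep(5 * time.Second)
    	}
    	fmt.Println("gave up: failed to resolve pod IPs")
    }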
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.46025ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.352ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.253208ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.032083ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
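Note the doubled space in `exec  -- nslookup` and in "Pod  could not resolve": the pod-name slot is empty because the name lookup at multinode_test.go:528 already failed, so each DNS probe is doomed regardless of cluster DNS. A sketch of how that empty slot arises; the assembly below is illustrative, not the test's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	pod := "" // empty: the `kubectl get pods -o jsonpath=...` call above returned nothing
    	args := []string{"kubectl", "-p", "multinode-715000", "--", "exec", pod, "--", "nslookup", "kubernetes.io"}
    	fmt.Println(strings.Join(args, " ")) // note the doubled space where the pod name should be
    }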
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.814209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (110.91s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-715000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.172042ms)

** stderr ** 
	error: no server found for cluster "multinode-715000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.599958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-715000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-715000 -v 3 --alsologtostderr: exit status 83 (41.297208ms)

-- stdout --
	* The control-plane node multinode-715000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-715000"

-- /stdout --
** stderr ** 
	I0915 11:39:42.067194    4722 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:42.067363    4722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.067366    4722 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:42.067368    4722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.067493    4722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:42.067709    4722 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:42.067917    4722 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:42.072054    4722 out.go:177] * The control-plane node multinode-715000 host is not running: state=Stopped
	I0915 11:39:42.076059    4722 out.go:177]   To start a cluster, run: "minikube start -p multinode-715000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-715000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.42525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-715000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-715000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.768292ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-715000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-715000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-715000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
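Two failures stack here: kubectl exits non-zero because the kubeconfig context is gone, so the test receives an empty string, and decoding an empty string is itself an error. The `unexpected end of JSON input` message can be reproduced in isolation:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	var labels []map[string]string
    	err := json.Unmarshal([]byte(""), &labels)
    	fmt.Println(err) // unexpected end of JSON input
    }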
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.86725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-715000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-715000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-715000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-715000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.280541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status --output json --alsologtostderr: exit status 7 (30.509458ms)

-- stdout --
	{"Name":"multinode-715000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0915 11:39:42.277397    4734 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:42.277529    4734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.277532    4734 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:42.277534    4734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.277667    4734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:42.277790    4734 out.go:352] Setting JSON to true
	I0915 11:39:42.277803    4734 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:42.277867    4734 notify.go:220] Checking for updates...
	I0915 11:39:42.278012    4734 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:42.278018    4734 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:42.278255    4734 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:42.278259    4734 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:42.278261    4734 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-715000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
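`minikube status --output json` emits a single object for a one-node profile (see the stdout above), while the test decodes into a slice of `cmd.Status`; encoding/json will not place an object into a slice, which is exactly the reported error. Reproduced with a stand-in struct:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type status struct {
    	Name, Host string
    }

    func main() {
    	raw := []byte(`{"Name":"multinode-715000","Host":"Stopped"}`)
    	var many []status
    	fmt.Println(json.Unmarshal(raw, &many)) // json: cannot unmarshal object into Go value of type []main.status
    	var one status
    	fmt.Println(json.Unmarshal(raw, &one)) // <nil>
    }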
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.203083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 node stop m03: exit status 85 (46.945166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-715000 node stop m03": exit status 85
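minikube maps its error classes to reserved exit codes; in the run above, status 85 travels with GUEST_NODE_RETRIEVE because node m03 was never created, so there is nothing to stop. A harness can recover the code via *exec.ExitError; a sketch against the same command line:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-715000", "node", "stop", "m03")
    	err := cmd.Run()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		fmt.Println("exit code:", exitErr.ExitCode()) // 85 in the run above (GUEST_NODE_RETRIEVE)
    	}
    }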
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status: exit status 7 (30.454125ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr: exit status 7 (30.2735ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:42.416130    4742 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:42.416269    4742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.416273    4742 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:42.416275    4742 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.416392    4742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:42.416507    4742 out.go:352] Setting JSON to false
	I0915 11:39:42.416516    4742 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:42.416584    4742 notify.go:220] Checking for updates...
	I0915 11:39:42.416712    4742 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:42.416719    4742 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:42.416969    4742 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:42.416973    4742 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:42.416975    4742 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr": multinode-715000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.384583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (38.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.432834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0915 11:39:42.476385    4746 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:42.476622    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.476625    4746 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:42.476627    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.476757    4746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:42.476970    4746 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:42.477169    4746 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:42.481962    4746 out.go:201] 
	W0915 11:39:42.485072    4746 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0915 11:39:42.485078    4746 out.go:270] * 
	* 
	W0915 11:39:42.486867    4746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:39:42.487998    4746 out.go:201] 

** /stderr **
multinode_test.go:284: I0915 11:39:42.476385    4746 out.go:345] Setting OutFile to fd 1 ...
I0915 11:39:42.476622    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:39:42.476625    4746 out.go:358] Setting ErrFile to fd 2...
I0915 11:39:42.476627    4746 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:39:42.476757    4746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:39:42.476970    4746 mustload.go:65] Loading cluster: multinode-715000
I0915 11:39:42.477169    4746 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:39:42.481962    4746 out.go:201] 
W0915 11:39:42.485072    4746 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0915 11:39:42.485078    4746 out.go:270] * 
* 
W0915 11:39:42.486867    4746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0915 11:39:42.487998    4746 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-715000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (30.893459ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:42.522342    4748 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:42.522465    4748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.522468    4748 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:42.522471    4748 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:42.522605    4748 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:42.522724    4748 out.go:352] Setting JSON to false
	I0915 11:39:42.522734    4748 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:42.522801    4748 notify.go:220] Checking for updates...
	I0915 11:39:42.522945    4748 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:42.522952    4748 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:42.523197    4748 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:42.523201    4748 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:42.523203    4748 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (76.206875ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:43.939483    4750 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:43.939691    4750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:43.939697    4750 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:43.939700    4750 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:43.939891    4750 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:43.940055    4750 out.go:352] Setting JSON to false
	I0915 11:39:43.940068    4750 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:43.940117    4750 notify.go:220] Checking for updates...
	I0915 11:39:43.940374    4750 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:43.940383    4750 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:43.940733    4750 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:43.940739    4750 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:43.940742    4750 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (74.074875ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:45.743593    4752 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:45.743768    4752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:45.743772    4752 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:45.743776    4752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:45.743960    4752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:45.744106    4752 out.go:352] Setting JSON to false
	I0915 11:39:45.744119    4752 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:45.744165    4752 notify.go:220] Checking for updates...
	I0915 11:39:45.744397    4752 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:45.744405    4752 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:45.744712    4752 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:45.744717    4752 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:45.744720    4752 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (73.487291ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:48.647268    4757 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:48.647470    4757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:48.647475    4757 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:48.647478    4757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:48.647665    4757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:48.647835    4757 out.go:352] Setting JSON to false
	I0915 11:39:48.647847    4757 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:48.647879    4757 notify.go:220] Checking for updates...
	I0915 11:39:48.648127    4757 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:48.648134    4757 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:48.648451    4757 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:48.648455    4757 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:48.648458    4757 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (73.1485ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:50.738417    4761 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:50.738640    4761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:50.738644    4761 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:50.738647    4761 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:50.738787    4761 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:50.738936    4761 out.go:352] Setting JSON to false
	I0915 11:39:50.738948    4761 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:50.738988    4761 notify.go:220] Checking for updates...
	I0915 11:39:50.739226    4761 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:50.739235    4761 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:50.739539    4761 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:50.739544    4761 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:50.739547    4761 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (75.382083ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:39:55.944089    4763 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:39:55.944275    4763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:55.944279    4763 out.go:358] Setting ErrFile to fd 2...
	I0915 11:39:55.944283    4763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:39:55.944445    4763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:39:55.944595    4763 out.go:352] Setting JSON to false
	I0915 11:39:55.944608    4763 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:39:55.944649    4763 notify.go:220] Checking for updates...
	I0915 11:39:55.944850    4763 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:39:55.944857    4763 status.go:255] checking status of multinode-715000 ...
	I0915 11:39:55.945173    4763 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:39:55.945178    4763 status.go:343] host is not running, skipping remaining checks
	I0915 11:39:55.945181    4763 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (74.109625ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:40:00.526565    4765 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:00.526760    4765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:00.526764    4765 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:00.526768    4765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:00.526954    4765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:00.527108    4765 out.go:352] Setting JSON to false
	I0915 11:40:00.527126    4765 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:40:00.527158    4765 notify.go:220] Checking for updates...
	I0915 11:40:00.527415    4765 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:00.527423    4765 status.go:255] checking status of multinode-715000 ...
	I0915 11:40:00.527724    4765 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:40:00.527729    4765 status.go:343] host is not running, skipping remaining checks
	I0915 11:40:00.527732    4765 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (74.664041ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:40:07.353316    4767 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:07.353502    4767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:07.353507    4767 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:07.353510    4767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:07.353664    4767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:07.353820    4767 out.go:352] Setting JSON to false
	I0915 11:40:07.353835    4767 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:40:07.353878    4767 notify.go:220] Checking for updates...
	I0915 11:40:07.354123    4767 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:07.354131    4767 status.go:255] checking status of multinode-715000 ...
	I0915 11:40:07.354441    4767 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:40:07.354446    4767 status.go:343] host is not running, skipping remaining checks
	I0915 11:40:07.354448    4767 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr: exit status 7 (73.001625ms)

-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0915 11:40:20.817328    4770 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:20.817522    4770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:20.817527    4770 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:20.817530    4770 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:20.817702    4770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:20.817862    4770 out.go:352] Setting JSON to false
	I0915 11:40:20.817875    4770 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:40:20.817917    4770 notify.go:220] Checking for updates...
	I0915 11:40:20.818148    4770 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:20.818156    4770 status.go:255] checking status of multinode-715000 ...
	I0915 11:40:20.818481    4770 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:40:20.818485    4770 status.go:343] host is not running, skipping remaining checks
	I0915 11:40:20.818488    4770 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-715000 status -v=7 --alsologtostderr" : exit status 7
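The timestamps of the nine status probes above (11:39:42 through 11:40:20) show the harness rechecking at growing intervals rather than on a fixed tick. An illustrative backoff loop, with intervals chosen to echo the log rather than copied from the test's source:

    package main

    import (
    	"os/exec"
    	"time"
    )

    func main() {
    	wait := time.Second
    	for attempt := 0; attempt < 9; attempt++ {
    		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-715000", "status").Run()
    		if err == nil {
    			return // host came back
    		}
    		time.Sleep(wait)
    		if wait < 16*time.Second {
    			wait *= 2 // roughly matches the 1s, 2s, 3s, 5s, ... gaps above
    		}
    	}
    }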
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (33.394041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (38.41s)

TestMultiNode/serial/RestartKeepsNodes (7.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-715000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-715000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-715000: (1.778659416s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219072917s)

-- stdout --
	* [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	* Restarting existing qemu2 VM for "multinode-715000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-715000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:40:22.722940    4786 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:22.723087    4786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:22.723092    4786 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:22.723095    4786 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:22.723253    4786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:22.724403    4786 out.go:352] Setting JSON to false
	I0915 11:40:22.743528    4786 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4185,"bootTime":1726421437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:40:22.743604    4786 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:40:22.748329    4786 out.go:177] * [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:40:22.756372    4786 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:40:22.756415    4786 notify.go:220] Checking for updates...
	I0915 11:40:22.763306    4786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:40:22.766311    4786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:40:22.769335    4786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:40:22.772369    4786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:40:22.775340    4786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:40:22.778595    4786 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:22.778645    4786 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:40:22.783338    4786 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:40:22.790264    4786 start.go:297] selected driver: qemu2
	I0915 11:40:22.790269    4786 start.go:901] validating driver "qemu2" against &{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:40:22.790326    4786 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:40:22.792934    4786 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:40:22.793002    4786 cni.go:84] Creating CNI manager for ""
	I0915 11:40:22.793038    4786 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 11:40:22.793091    4786 start.go:340] cluster config:
	{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:40:22.796940    4786 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:22.804357    4786 out.go:177] * Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	I0915 11:40:22.808360    4786 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:40:22.808384    4786 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:40:22.808397    4786 cache.go:56] Caching tarball of preloaded images
	I0915 11:40:22.808463    4786 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:40:22.808477    4786 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:40:22.808536    4786 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/multinode-715000/config.json ...
	I0915 11:40:22.809025    4786 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:40:22.809064    4786 start.go:364] duration metric: took 32.333µs to acquireMachinesLock for "multinode-715000"
	I0915 11:40:22.809074    4786 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:40:22.809078    4786 fix.go:54] fixHost starting: 
	I0915 11:40:22.809206    4786 fix.go:112] recreateIfNeeded on multinode-715000: state=Stopped err=<nil>
	W0915 11:40:22.809215    4786 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:40:22.813291    4786 out.go:177] * Restarting existing qemu2 VM for "multinode-715000" ...
	I0915 11:40:22.821295    4786 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:40:22.821331    4786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0b:d1:ae:ed:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:40:22.823386    4786 main.go:141] libmachine: STDOUT: 
	I0915 11:40:22.823406    4786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:40:22.823440    4786 fix.go:56] duration metric: took 14.359834ms for fixHost
	I0915 11:40:22.823446    4786 start.go:83] releasing machines lock for "multinode-715000", held for 14.376792ms
	W0915 11:40:22.823452    4786 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:40:22.823487    4786 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:22.823492    4786 start.go:729] Will try again in 5 seconds ...
	I0915 11:40:27.825634    4786 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:40:27.826004    4786 start.go:364] duration metric: took 283.208µs to acquireMachinesLock for "multinode-715000"
	I0915 11:40:27.826129    4786 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:40:27.826148    4786 fix.go:54] fixHost starting: 
	I0915 11:40:27.826854    4786 fix.go:112] recreateIfNeeded on multinode-715000: state=Stopped err=<nil>
	W0915 11:40:27.826880    4786 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:40:27.831259    4786 out.go:177] * Restarting existing qemu2 VM for "multinode-715000" ...
	I0915 11:40:27.839151    4786 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:40:27.839380    4786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0b:d1:ae:ed:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:40:27.848409    4786 main.go:141] libmachine: STDOUT: 
	I0915 11:40:27.848479    4786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:40:27.848587    4786 fix.go:56] duration metric: took 22.441958ms for fixHost
	I0915 11:40:27.848609    4786 start.go:83] releasing machines lock for "multinode-715000", held for 22.584458ms
	W0915 11:40:27.848831    4786 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:27.854593    4786 out.go:201] 
	W0915 11:40:27.858233    4786 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:40:27.858257    4786 out.go:270] * 
	* 
	W0915 11:40:27.861003    4786 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:40:27.868207    4786 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-715000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-715000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (32.567ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.13s)
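Every restart attempt in the transcript above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor and the VM stays stopped. A minimal Go sketch of the launch pattern visible in the exec line above, under stated assumptions: this illustrates the mechanism, it is not minikube's code, and the real socket_vmnet_client passes a separate datagram fd it receives from the daemon rather than the dialed connection itself.

	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// "Connection refused" in the log is exactly what this dial returns
		// when no socket_vmnet daemon is accepting on the socket.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("socket_vmnet daemon not reachable: %v", err)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			log.Fatal(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child process, which is what
		// "-netdev socket,id=net0,fd=3" in the qemu command line refers to.
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3",
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f}
		log.Println(cmd.Run())
	}

Run on this agent the sketch would fail at the dial, matching the transcript; a listening socket_vmnet daemon (however it was installed on the host) is the precondition every one of these restarts is missing.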

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 node delete m03: exit status 83 (40.982833ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-715000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-715000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-715000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr: exit status 7 (30.045041ms)

                                                
                                                
-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:40:28.053976    4800 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:28.054119    4800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:28.054123    4800 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:28.054126    4800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:28.054265    4800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:28.054386    4800 out.go:352] Setting JSON to false
	I0915 11:40:28.054399    4800 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:40:28.054456    4800 notify.go:220] Checking for updates...
	I0915 11:40:28.054607    4800 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:28.054612    4800 status.go:255] checking status of multinode-715000 ...
	I0915 11:40:28.054855    4800 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:40:28.054859    4800 status.go:343] host is not running, skipping remaining checks
	I0915 11:40:28.054861    4800 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.364708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-715000 stop: (3.43122275s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status: exit status 7 (66.464416ms)

                                                
                                                
-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr: exit status 7 (33.288291ms)

                                                
                                                
-- stdout --
	multinode-715000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:40:31.615929    4826 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:31.616088    4826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:31.616091    4826 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:31.616093    4826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:31.616222    4826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:31.616346    4826 out.go:352] Setting JSON to false
	I0915 11:40:31.616356    4826 mustload.go:65] Loading cluster: multinode-715000
	I0915 11:40:31.616415    4826 notify.go:220] Checking for updates...
	I0915 11:40:31.616564    4826 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:31.616570    4826 status.go:255] checking status of multinode-715000 ...
	I0915 11:40:31.616811    4826 status.go:330] multinode-715000 host status = "Stopped" (err=<nil>)
	I0915 11:40:31.616814    4826 status.go:343] host is not running, skipping remaining checks
	I0915 11:40:31.616816    4826 status.go:257] multinode-715000 status: &{Name:multinode-715000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr": multinode-715000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-715000 status --alsologtostderr": multinode-715000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.550916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.56s)
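The two complaints at multinode_test.go:364 and :368 are count checks: the profile was requested as a two-node cluster, but since every node add/restart above failed, status prints only the control-plane entry, so one "host: Stopped" and one "kubelet: Stopped" appear where the test expects two of each. A rough, self-contained sketch of the shape of that check, assuming a simple substring count (an illustration, not the actual test source):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Condensed from the status output above; only one node reports in.
		status := "multinode-715000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		got := strings.Count(status, "host: Stopped")
		want := 2 // the profile was requested with two nodes
		if got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}

With the second node never created, the count stays at 1 and the subtest fails even though the stop itself succeeded.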

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.1717s)

                                                
                                                
-- stdout --
	* [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	* Restarting existing qemu2 VM for "multinode-715000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-715000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:40:31.676780    4830 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:31.676897    4830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:31.676901    4830 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:31.676903    4830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:31.677053    4830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:31.678050    4830 out.go:352] Setting JSON to false
	I0915 11:40:31.694207    4830 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4194,"bootTime":1726421437,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:40:31.694267    4830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:40:31.699250    4830 out.go:177] * [multinode-715000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:40:31.706213    4830 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:40:31.706274    4830 notify.go:220] Checking for updates...
	I0915 11:40:31.713175    4830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:40:31.716101    4830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:40:31.719187    4830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:40:31.722208    4830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:40:31.725174    4830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:40:31.728433    4830 config.go:182] Loaded profile config "multinode-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:31.728751    4830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:40:31.733215    4830 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:40:31.740107    4830 start.go:297] selected driver: qemu2
	I0915 11:40:31.740113    4830 start.go:901] validating driver "qemu2" against &{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:40:31.740167    4830 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:40:31.742575    4830 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:40:31.742598    4830 cni.go:84] Creating CNI manager for ""
	I0915 11:40:31.742629    4830 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0915 11:40:31.742670    4830 start.go:340] cluster config:
	{Name:multinode-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:40:31.746409    4830 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:31.753186    4830 out.go:177] * Starting "multinode-715000" primary control-plane node in "multinode-715000" cluster
	I0915 11:40:31.757242    4830 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:40:31.757257    4830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:40:31.757268    4830 cache.go:56] Caching tarball of preloaded images
	I0915 11:40:31.757332    4830 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:40:31.757338    4830 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:40:31.757393    4830 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/multinode-715000/config.json ...
	I0915 11:40:31.757859    4830 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:40:31.757890    4830 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "multinode-715000"
	I0915 11:40:31.757899    4830 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:40:31.757903    4830 fix.go:54] fixHost starting: 
	I0915 11:40:31.758022    4830 fix.go:112] recreateIfNeeded on multinode-715000: state=Stopped err=<nil>
	W0915 11:40:31.758031    4830 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:40:31.761104    4830 out.go:177] * Restarting existing qemu2 VM for "multinode-715000" ...
	I0915 11:40:31.765131    4830 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:40:31.765166    4830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0b:d1:ae:ed:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:40:31.767258    4830 main.go:141] libmachine: STDOUT: 
	I0915 11:40:31.767282    4830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:40:31.767315    4830 fix.go:56] duration metric: took 9.409792ms for fixHost
	I0915 11:40:31.767320    4830 start.go:83] releasing machines lock for "multinode-715000", held for 9.425958ms
	W0915 11:40:31.767326    4830 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:40:31.767368    4830 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:31.767373    4830 start.go:729] Will try again in 5 seconds ...
	I0915 11:40:36.769568    4830 start.go:360] acquireMachinesLock for multinode-715000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:40:36.769959    4830 start.go:364] duration metric: took 293µs to acquireMachinesLock for "multinode-715000"
	I0915 11:40:36.770078    4830 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:40:36.770100    4830 fix.go:54] fixHost starting: 
	I0915 11:40:36.770754    4830 fix.go:112] recreateIfNeeded on multinode-715000: state=Stopped err=<nil>
	W0915 11:40:36.770779    4830 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:40:36.775063    4830 out.go:177] * Restarting existing qemu2 VM for "multinode-715000" ...
	I0915 11:40:36.779099    4830 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:40:36.779380    4830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:0b:d1:ae:ed:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/multinode-715000/disk.qcow2
	I0915 11:40:36.787865    4830 main.go:141] libmachine: STDOUT: 
	I0915 11:40:36.787934    4830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:40:36.788001    4830 fix.go:56] duration metric: took 17.899875ms for fixHost
	I0915 11:40:36.788017    4830 start.go:83] releasing machines lock for "multinode-715000", held for 18.037959ms
	W0915 11:40:36.788224    4830 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-715000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:36.795095    4830 out.go:201] 
	W0915 11:40:36.799162    4830 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:40:36.799186    4830 out.go:270] * 
	* 
	W0915 11:40:36.801669    4830 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:40:36.808158    4830 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-715000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (68.900833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-715000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-715000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-715000-m01 --driver=qemu2 : exit status 80 (9.869892542s)

                                                
                                                
-- stdout --
	* [multinode-715000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-715000-m01" primary control-plane node in "multinode-715000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-715000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-715000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-715000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-715000-m02 --driver=qemu2 : exit status 80 (9.99070675s)

                                                
                                                
-- stdout --
	* [multinode-715000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-715000-m02" primary control-plane node in "multinode-715000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-715000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-715000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-715000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-715000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-715000: exit status 83 (77.799667ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-715000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-715000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-715000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-715000 -n multinode-715000: exit status 7 (30.739791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-715000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.09s)

                                                
                                    
TestPreload (10.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-376000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-376000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.919970083s)

                                                
                                                
-- stdout --
	* [test-preload-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-376000" primary control-plane node in "test-preload-376000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-376000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:40:57.120295    4885 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:40:57.120420    4885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:57.120423    4885 out.go:358] Setting ErrFile to fd 2...
	I0915 11:40:57.120425    4885 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:40:57.120558    4885 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:40:57.121607    4885 out.go:352] Setting JSON to false
	I0915 11:40:57.137553    4885 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4220,"bootTime":1726421437,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:40:57.137626    4885 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:40:57.143690    4885 out.go:177] * [test-preload-376000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:40:57.152529    4885 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:40:57.152612    4885 notify.go:220] Checking for updates...
	I0915 11:40:57.159396    4885 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:40:57.162539    4885 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:40:57.165518    4885 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:40:57.166942    4885 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:40:57.170482    4885 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:40:57.173808    4885 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:40:57.173863    4885 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:40:57.178382    4885 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:40:57.185487    4885 start.go:297] selected driver: qemu2
	I0915 11:40:57.185494    4885 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:40:57.185501    4885 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:40:57.187774    4885 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:40:57.191550    4885 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:40:57.194566    4885 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:40:57.194585    4885 cni.go:84] Creating CNI manager for ""
	I0915 11:40:57.194606    4885 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:40:57.194613    4885 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:40:57.194640    4885 start.go:340] cluster config:
	{Name:test-preload-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:40:57.198382    4885 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.205403    4885 out.go:177] * Starting "test-preload-376000" primary control-plane node in "test-preload-376000" cluster
	I0915 11:40:57.209495    4885 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0915 11:40:57.209562    4885 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/test-preload-376000/config.json ...
	I0915 11:40:57.209576    4885 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/test-preload-376000/config.json: {Name:mk69d06f47d24c5d43a017b75c1971a391569a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:40:57.209594    4885 cache.go:107] acquiring lock: {Name:mka79e24e2c982fb86fd7d5b4b3af5a1b4af7d4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209594    4885 cache.go:107] acquiring lock: {Name:mk245377517910de1d63326d274ed2f98f105eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209607    4885 cache.go:107] acquiring lock: {Name:mk4fb181f29a884ebcb9402e25daf94ad90564ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209783    4885 cache.go:107] acquiring lock: {Name:mk8fa5368f4fd863e04d3e970228e6da2fb2dbff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209799    4885 cache.go:107] acquiring lock: {Name:mk0b986bac8a6b2080446cd2429907223266536c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209838    4885 cache.go:107] acquiring lock: {Name:mk0a3e3832c0a4236a3a641e02eab6fdba8dfdbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209845    4885 start.go:360] acquireMachinesLock for test-preload-376000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:40:57.209902    4885 start.go:364] duration metric: took 50.5µs to acquireMachinesLock for "test-preload-376000"
	I0915 11:40:57.209918    4885 cache.go:107] acquiring lock: {Name:mk6c0326d56efadf5f315b53523add8020459178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.209917    4885 start.go:93] Provisioning new machine with config: &{Name:test-preload-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:40:57.209943    4885 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:40:57.210095    4885 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:40:57.210116    4885 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0915 11:40:57.210116    4885 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0915 11:40:57.210121    4885 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0915 11:40:57.210096    4885 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0915 11:40:57.210123    4885 cache.go:107] acquiring lock: {Name:mkae72fce9d90660f517ceca5febdd50201b49b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:40:57.210469    4885 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:40:57.210484    4885 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0915 11:40:57.210517    4885 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:40:57.214489    4885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:40:57.218958    4885 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:40:57.221751    4885 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0915 11:40:57.221799    4885 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0915 11:40:57.221938    4885 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:40:57.222193    4885 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0915 11:40:57.222237    4885 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0915 11:40:57.223998    4885 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:40:57.224067    4885 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0915 11:40:57.232487    4885 start.go:159] libmachine.API.Create for "test-preload-376000" (driver="qemu2")
	I0915 11:40:57.232508    4885 client.go:168] LocalClient.Create starting
	I0915 11:40:57.232627    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:40:57.232657    4885 main.go:141] libmachine: Decoding PEM data...
	I0915 11:40:57.232665    4885 main.go:141] libmachine: Parsing certificate...
	I0915 11:40:57.232703    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:40:57.232726    4885 main.go:141] libmachine: Decoding PEM data...
	I0915 11:40:57.232735    4885 main.go:141] libmachine: Parsing certificate...
	I0915 11:40:57.233058    4885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:40:57.394239    4885 main.go:141] libmachine: Creating SSH key...
	I0915 11:40:57.487958    4885 main.go:141] libmachine: Creating Disk image...
	I0915 11:40:57.487990    4885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:40:57.488164    4885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:40:57.497466    4885 main.go:141] libmachine: STDOUT: 
	I0915 11:40:57.497485    4885 main.go:141] libmachine: STDERR: 
	I0915 11:40:57.497541    4885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2 +20000M
	I0915 11:40:57.506754    4885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:40:57.506775    4885 main.go:141] libmachine: STDERR: 
	I0915 11:40:57.506791    4885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:40:57.506796    4885 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:40:57.506810    4885 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:40:57.506840    4885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0b:d1:99:d4:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:40:57.509021    4885 main.go:141] libmachine: STDOUT: 
	I0915 11:40:57.509051    4885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:40:57.509070    4885 client.go:171] duration metric: took 276.55975ms to LocalClient.Create
	I0915 11:40:57.658858    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0915 11:40:57.689603    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0915 11:40:57.691368    4885 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0915 11:40:57.691420    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0915 11:40:57.710279    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0915 11:40:57.727457    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0915 11:40:57.741125    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0915 11:40:57.806869    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0915 11:40:57.849534    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0915 11:40:57.849583    4885 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 639.823791ms
	I0915 11:40:57.849614    4885 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0915 11:40:58.356037    4885 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0915 11:40:58.356151    4885 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0915 11:40:59.107027    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0915 11:40:59.107071    4885 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.897498625s
	I0915 11:40:59.107099    4885 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0915 11:40:59.281859    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0915 11:40:59.281913    4885 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.072086042s
	I0915 11:40:59.281939    4885 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0915 11:40:59.509374    4885 start.go:128] duration metric: took 2.299430708s to createHost
	I0915 11:40:59.509441    4885 start.go:83] releasing machines lock for "test-preload-376000", held for 2.299554916s
	W0915 11:40:59.509503    4885 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:59.521552    4885 out.go:177] * Deleting "test-preload-376000" in qemu2 ...
	W0915 11:40:59.554295    4885 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:40:59.554329    4885 start.go:729] Will try again in 5 seconds ...
	I0915 11:40:59.790237    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0915 11:40:59.790285    4885 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.580726959s
	I0915 11:40:59.790312    4885 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0915 11:41:02.166640    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0915 11:41:02.166734    4885 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.957207667s
	I0915 11:41:02.166767    4885 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0915 11:41:02.608938    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0915 11:41:02.608988    4885 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.398926167s
	I0915 11:41:02.609018    4885 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0915 11:41:02.686727    4885 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0915 11:41:02.686772    4885 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.477019166s
	I0915 11:41:02.686838    4885 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0915 11:41:04.554653    4885 start.go:360] acquireMachinesLock for test-preload-376000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:41:04.555076    4885 start.go:364] duration metric: took 352.292µs to acquireMachinesLock for "test-preload-376000"
	I0915 11:41:04.555188    4885 start.go:93] Provisioning new machine with config: &{Name:test-preload-376000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:41:04.555394    4885 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:41:04.565165    4885 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:41:04.617942    4885 start.go:159] libmachine.API.Create for "test-preload-376000" (driver="qemu2")
	I0915 11:41:04.618029    4885 client.go:168] LocalClient.Create starting
	I0915 11:41:04.618165    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:41:04.618241    4885 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:04.618261    4885 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:04.618330    4885 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:41:04.618376    4885 main.go:141] libmachine: Decoding PEM data...
	I0915 11:41:04.618403    4885 main.go:141] libmachine: Parsing certificate...
	I0915 11:41:04.618909    4885 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:41:04.785503    4885 main.go:141] libmachine: Creating SSH key...
	I0915 11:41:04.930552    4885 main.go:141] libmachine: Creating Disk image...
	I0915 11:41:04.930559    4885 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:41:04.930743    4885 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:41:04.940271    4885 main.go:141] libmachine: STDOUT: 
	I0915 11:41:04.940295    4885 main.go:141] libmachine: STDERR: 
	I0915 11:41:04.940364    4885 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2 +20000M
	I0915 11:41:04.948762    4885 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:41:04.948777    4885 main.go:141] libmachine: STDERR: 
	I0915 11:41:04.948787    4885 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:41:04.948795    4885 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:41:04.948805    4885 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:41:04.948838    4885 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:80:aa:13:dd:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/test-preload-376000/disk.qcow2
	I0915 11:41:04.950609    4885 main.go:141] libmachine: STDOUT: 
	I0915 11:41:04.950622    4885 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:41:04.950635    4885 client.go:171] duration metric: took 332.605041ms to LocalClient.Create
	I0915 11:41:06.952607    4885 start.go:128] duration metric: took 2.397213333s to createHost
	I0915 11:41:06.952661    4885 start.go:83] releasing machines lock for "test-preload-376000", held for 2.397589875s
	W0915 11:41:06.952882    4885 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:41:06.971586    4885 out.go:201] 
	W0915 11:41:06.976619    4885 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:41:06.976675    4885 out.go:270] * 
	* 
	W0915 11:41:06.979290    4885 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:41:06.993413    4885 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-376000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-15 11:41:07.014097 -0700 PDT m=+2732.862785709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-376000 -n test-preload-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-376000 -n test-preload-376000: exit status 7 (67.877125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-376000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-376000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-376000
--- FAIL: TestPreload (10.07s)
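
The ten-second failures above and below all share one proximate cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu2 VM is never created and each test gives up with exit status 80 after a single retry. That points at the socket_vmnet daemon being down (or listening at a different path) on this CI host rather than at TestPreload itself. As a minimal sketch, assuming only the socket path taken from the logs above (this probe is hypothetical and not part of the minikube test suite), the host-side check can be reproduced in a few lines of Go:

	// socketprobe.go - hypothetical standalone helper, not minikube code.
	// Dials the socket that socket_vmnet_client needs; on a healthy host it
	// prints "reachable", on a host in the state captured above it fails with
	// the same "connection refused" seen in every start attempt here.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Printf("socket_vmnet is reachable at %s\n", sock)
	}

If the probe fails the same way, restarting the daemon (socket_vmnet is typically run as root, e.g. via launchd, per its documentation) is the first thing to try before re-running the suite.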

TestScheduledStopUnix (10.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-151000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-151000 --memory=2048 --driver=qemu2 : exit status 80 (9.911544333s)

-- stdout --
	* [scheduled-stop-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-151000" primary control-plane node in "scheduled-stop-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-151000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-151000" primary control-plane node in "scheduled-stop-151000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-151000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-151000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-15 11:41:17.075467 -0700 PDT m=+2742.924278834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-151000 -n scheduled-stop-151000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-151000 -n scheduled-stop-151000: exit status 7 (71.454042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-151000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-151000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-151000
--- FAIL: TestScheduledStopUnix (10.07s)

TestSkaffold (12.68s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1966076627 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1966076627 version: (1.078162083s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-998000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-998000 --memory=2600 --driver=qemu2 : exit status 80 (9.951105583s)

-- stdout --
	* [skaffold-998000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-998000" primary control-plane node in "skaffold-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-998000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-998000" primary control-plane node in "skaffold-998000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-998000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-998000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-15 11:41:29.764333 -0700 PDT m=+2755.613300667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-998000 -n skaffold-998000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-998000 -n skaffold-998000: exit status 7 (62.079458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-998000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-998000
--- FAIL: TestSkaffold (12.68s)
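
For context on why a refused connection is immediately fatal: the launch commands captured in the stderr blocks above show that minikube never starts qemu-system-aarch64 directly. It runs /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to /var/run/socket_vmnet and then starts qemu with the connected descriptor inherited as fd 3; that is what the -netdev socket,id=net0,fd=3 flag in those commands refers to. Below is a rough Go illustration of that hand-off, with made-up qemu arguments; the real client is a small C program from the socket_vmnet project, so this is a sketch of the mechanism, not its implementation:

	// fd-handoff sketch: illustrative only.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// The step that fails throughout this report: with no daemon
		// listening, Dial returns "connection refused" and no VM ever starts.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			log.Fatalf("Failed to connect to \"/var/run/socket_vmnet\": %v", err)
		}
		f, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64",
			"-netdev", "socket,id=net0,fd=3", // fd 3 = first ExtraFiles entry
			"-device", "virtio-net-pci,netdev=net0")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}

Because every qemu2 start funnels through this single hand-off, one unreachable daemon is enough to explain the uniform failure signature across these otherwise unrelated tests.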

TestRunningBinaryUpgrade (588.43s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3484948275 start -p running-upgrade-196000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3484948275 start -p running-upgrade-196000 --memory=2200 --vm-driver=qemu2 : (48.474512708s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-196000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0915 11:44:13.134592    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:44:29.796499    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-196000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.976464667s)

-- stdout --
	* [running-upgrade-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-196000" primary control-plane node in "running-upgrade-196000" cluster
	* Updating the running qemu2 "running-upgrade-196000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0915 11:43:03.664725    5283 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:43:03.664879    5283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:43:03.664883    5283 out.go:358] Setting ErrFile to fd 2...
	I0915 11:43:03.664886    5283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:43:03.665028    5283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:43:03.666173    5283 out.go:352] Setting JSON to false
	I0915 11:43:03.683195    5283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4346,"bootTime":1726421437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:43:03.683272    5283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:43:03.688773    5283 out.go:177] * [running-upgrade-196000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:43:03.696798    5283 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:43:03.696868    5283 notify.go:220] Checking for updates...
	I0915 11:43:03.703730    5283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:43:03.706779    5283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:43:03.709799    5283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:43:03.714713    5283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:43:03.722780    5283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:43:03.726077    5283 config.go:182] Loaded profile config "running-upgrade-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:43:03.729739    5283 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 11:43:03.732753    5283 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:43:03.736781    5283 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:43:03.743771    5283 start.go:297] selected driver: qemu2
	I0915 11:43:03.743777    5283 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:43:03.743825    5283 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:43:03.745976    5283 cni.go:84] Creating CNI manager for ""
	I0915 11:43:03.746008    5283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:43:03.746031    5283 start.go:340] cluster config:
	{Name:running-upgrade-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:43:03.746078    5283 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:43:03.752748    5283 out.go:177] * Starting "running-upgrade-196000" primary control-plane node in "running-upgrade-196000" cluster
	I0915 11:43:03.756774    5283 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:43:03.756786    5283 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0915 11:43:03.756792    5283 cache.go:56] Caching tarball of preloaded images
	I0915 11:43:03.756842    5283 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:43:03.756847    5283 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0915 11:43:03.756896    5283 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/config.json ...
	I0915 11:43:03.757254    5283 start.go:360] acquireMachinesLock for running-upgrade-196000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:43:03.757280    5283 start.go:364] duration metric: took 21.666µs to acquireMachinesLock for "running-upgrade-196000"
	I0915 11:43:03.757288    5283 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:43:03.757293    5283 fix.go:54] fixHost starting: 
	I0915 11:43:03.757890    5283 fix.go:112] recreateIfNeeded on running-upgrade-196000: state=Running err=<nil>
	W0915 11:43:03.757898    5283 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:43:03.766795    5283 out.go:177] * Updating the running qemu2 "running-upgrade-196000" VM ...
	I0915 11:43:03.770747    5283 machine.go:93] provisionDockerMachine start ...
	I0915 11:43:03.770780    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:03.770883    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:03.770888    5283 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 11:43:03.819457    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-196000
	
	I0915 11:43:03.819469    5283 buildroot.go:166] provisioning hostname "running-upgrade-196000"
	I0915 11:43:03.819514    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:03.819613    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:03.819618    5283 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-196000 && echo "running-upgrade-196000" | sudo tee /etc/hostname
	I0915 11:43:03.878646    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-196000
	
	I0915 11:43:03.878711    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:03.878820    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:03.878832    5283 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-196000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-196000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-196000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 11:43:03.929980    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
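The script above is minikube's idempotent /etc/hosts guard: it adds a 127.0.1.1 entry for the node name only if no mapping exists yet, and rewrites an existing 127.0.1.1 line in place otherwise, so repeated provisioning runs never stack duplicate entries. A quick way to confirm the mapping inside the guest (an illustrative check, not a command from this run):

    grep running-upgrade-196000 /etc/hosts
    # expected: 127.0.1.1 running-upgrade-196000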
	I0915 11:43:03.929995    5283 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1650/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1650/.minikube}
	I0915 11:43:03.930024    5283 buildroot.go:174] setting up certificates
	I0915 11:43:03.930028    5283 provision.go:84] configureAuth start
	I0915 11:43:03.930032    5283 provision.go:143] copyHostCerts
	I0915 11:43:03.930100    5283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem, removing ...
	I0915 11:43:03.930111    5283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem
	I0915 11:43:03.930242    5283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem (1078 bytes)
	I0915 11:43:03.930419    5283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem, removing ...
	I0915 11:43:03.930423    5283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem
	I0915 11:43:03.930468    5283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem (1123 bytes)
	I0915 11:43:03.930573    5283 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem, removing ...
	I0915 11:43:03.930576    5283 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem
	I0915 11:43:03.930627    5283 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem (1679 bytes)
	I0915 11:43:03.930704    5283 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-196000 san=[127.0.0.1 localhost minikube running-upgrade-196000]
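provision.go generates the server certificate in Go, signing it with the cluster CA and embedding the SANs listed in the line above. One way to inspect the SANs that actually ended up in the certificate (illustrative; not part of the test run):

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'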
	I0915 11:43:04.076460    5283 provision.go:177] copyRemoteCerts
	I0915 11:43:04.076514    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 11:43:04.076525    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:43:04.104414    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 11:43:04.112114    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0915 11:43:04.118874    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 11:43:04.125694    5283 provision.go:87] duration metric: took 195.664083ms to configureAuth
	I0915 11:43:04.125702    5283 buildroot.go:189] setting minikube options for container-runtime
	I0915 11:43:04.125818    5283 config.go:182] Loaded profile config "running-upgrade-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:43:04.125853    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:04.125940    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:04.125950    5283 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 11:43:04.177939    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0915 11:43:04.177958    5283 buildroot.go:70] root file system type: tmpfs
	I0915 11:43:04.178032    5283 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 11:43:04.178094    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:04.178219    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:04.178253    5283 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 11:43:04.234065    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 11:43:04.234129    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:04.234250    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:04.234259    5283 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 11:43:04.286373    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: 
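The rendered unit is staged as docker.service.new and only swapped in when `diff -u` reports a change (diff's non-zero exit takes the `|| { ... }` branch), so an unchanged unit never triggers a needless daemon restart. The double-ExecStart pattern the unit's own comments describe is plain systemd behavior: an empty `ExecStart=` clears any inherited command before the real one is set. A minimal standalone illustration of that pattern, with a hypothetical override path and a deliberately stripped-down dockerd command line (not minikube's code):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker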
	I0915 11:43:04.286384    5283 machine.go:96] duration metric: took 515.637292ms to provisionDockerMachine
	I0915 11:43:04.286389    5283 start.go:293] postStartSetup for "running-upgrade-196000" (driver="qemu2")
	I0915 11:43:04.286396    5283 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 11:43:04.286448    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 11:43:04.286457    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:43:04.315299    5283 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 11:43:04.316762    5283 info.go:137] Remote host: Buildroot 2021.02.12
	I0915 11:43:04.316770    5283 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/addons for local assets ...
	I0915 11:43:04.316838    5283 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/files for local assets ...
	I0915 11:43:04.316932    5283 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0915 11:43:04.317022    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 11:43:04.319619    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:43:04.326442    5283 start.go:296] duration metric: took 40.048083ms for postStartSetup
	I0915 11:43:04.326456    5283 fix.go:56] duration metric: took 569.17075ms for fixHost
	I0915 11:43:04.326496    5283 main.go:141] libmachine: Using SSH client type: native
	I0915 11:43:04.326603    5283 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102715190] 0x1027179d0 <nil>  [] 0s} localhost 50278 <nil> <nil>}
	I0915 11:43:04.326611    5283 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 11:43:04.379107    5283 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726425784.453873387
	
	I0915 11:43:04.379115    5283 fix.go:216] guest clock: 1726425784.453873387
	I0915 11:43:04.379119    5283 fix.go:229] Guest: 2024-09-15 11:43:04.453873387 -0700 PDT Remote: 2024-09-15 11:43:04.326457 -0700 PDT m=+0.681758293 (delta=127.416387ms)
	I0915 11:43:04.379130    5283 fix.go:200] guest clock delta is within tolerance: 127.416387ms
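fix.go compares the guest's `date +%s.%N` output against the host clock and proceeds when the skew falls inside its tolerance, here about 127ms. The check reduces to something like the sketch below (port and user taken from this log; GNU date assumed on the host side, since minikube itself does this with Go's time package, not shell):

    guest=$(ssh -p 50278 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)
    echo "delta: $(echo "$guest - $host" | bc)s"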
	I0915 11:43:04.379133    5283 start.go:83] releasing machines lock for "running-upgrade-196000", held for 621.855791ms
	I0915 11:43:04.379199    5283 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 11:43:04.379218    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:43:04.379199    5283 ssh_runner.go:195] Run: cat /version.json
	I0915 11:43:04.379232    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	W0915 11:43:04.379767    5283 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50278: connect: connection refused
	I0915 11:43:04.379790    5283 retry.go:31] will retry after 313.428133ms: dial tcp [::1]:50278: connect: connection refused
	W0915 11:43:04.405949    5283 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0915 11:43:04.405996    5283 ssh_runner.go:195] Run: systemctl --version
	I0915 11:43:04.407894    5283 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 11:43:04.409559    5283 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 11:43:04.409583    5283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0915 11:43:04.412816    5283 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0915 11:43:04.417260    5283 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
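The two `find ... -exec sed` passes above rewrite any bridge and podman CNI configs so their IPv4 subnet matches the 10.244.0.0/16 pod CIDR (dropping IPv6 dst/subnet entries along the way); in this run only 87-podman-bridge.conflist needed touching. A quick way to see the rewritten values (illustrative check):

    sudo grep -E '"(subnet|gateway)"' /etc/cni/net.d/87-podman-bridge.conflist
    # expected: "subnet": "10.244.0.0/16" and "gateway": "10.244.0.1"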
	I0915 11:43:04.417266    5283 start.go:495] detecting cgroup driver to use...
	I0915 11:43:04.417329    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:43:04.422637    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0915 11:43:04.426042    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 11:43:04.429354    5283 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 11:43:04.429385    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 11:43:04.432838    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:43:04.435601    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 11:43:04.438470    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:43:04.441922    5283 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 11:43:04.445401    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 11:43:04.448986    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 11:43:04.451874    5283 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 11:43:04.454910    5283 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 11:43:04.458173    5283 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 11:43:04.461301    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:04.563964    5283 ssh_runner.go:195] Run: sudo systemctl restart containerd
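Taken together, the sed passes above force containerd onto the cgroupfs driver, pin the pause image, and move every runtime onto the runc v2 shim. The net effect on /etc/containerd/config.toml is roughly the following fragment (reconstructed for illustration; the resulting file is not captured in the log):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false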
	I0915 11:43:04.572533    5283 start.go:495] detecting cgroup driver to use...
	I0915 11:43:04.572613    5283 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 11:43:04.578131    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:43:04.584104    5283 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 11:43:04.589721    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:43:04.594887    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 11:43:04.600058    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:43:04.605824    5283 ssh_runner.go:195] Run: which cri-dockerd
	I0915 11:43:04.607117    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 11:43:04.610126    5283 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0915 11:43:04.615104    5283 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 11:43:04.706975    5283 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 11:43:04.804962    5283 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 11:43:04.805023    5283 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
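The 130-byte payload written to /etc/docker/daemon.json is not echoed in the log; a daemon.json that selects the cgroupfs driver typically looks like this (illustrative, not the captured file):

    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF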
	I0915 11:43:04.810828    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:04.899962    5283 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:43:07.494295    5283 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.594342834s)
	I0915 11:43:07.494372    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 11:43:07.499692    5283 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0915 11:43:07.505933    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:43:07.510687    5283 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 11:43:07.590136    5283 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 11:43:07.671107    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:07.748914    5283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 11:43:07.755311    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:43:07.759930    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:07.847661    5283 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 11:43:07.887851    5283 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 11:43:07.887951    5283 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 11:43:07.890077    5283 start.go:563] Will wait 60s for crictl version
	I0915 11:43:07.890131    5283 ssh_runner.go:195] Run: which crictl
	I0915 11:43:07.891678    5283 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 11:43:07.903646    5283 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0915 11:43:07.903721    5283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:43:07.924149    5283 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:43:07.945859    5283 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0915 11:43:07.945995    5283 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0915 11:43:07.947318    5283 kubeadm.go:883] updating cluster {Name:running-upgrade-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0915 11:43:07.947361    5283 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:43:07.947410    5283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:43:07.958182    5283 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:43:07.958190    5283 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
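The mismatch here is the Kubernetes registry rename from k8s.gcr.io to registry.k8s.io: the old ISO's preload carries k8s.gcr.io tags, while this minikube build looks for registry.k8s.io names, so the preload is treated as missing and the tarball copy plus per-image cache loads below are replayed. A manual retag of one such image would look like this (illustrative only, not what minikube does here):

    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1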
	I0915 11:43:07.958246    5283 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:43:07.961812    5283 ssh_runner.go:195] Run: which lz4
	I0915 11:43:07.963306    5283 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 11:43:07.964507    5283 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 11:43:07.964515    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0915 11:43:08.910628    5283 docker.go:649] duration metric: took 947.371084ms to copy over tarball
	I0915 11:43:08.910699    5283 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 11:43:10.022968    5283 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.112269208s)
	I0915 11:43:10.022982    5283 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 11:43:10.038712    5283 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:43:10.042242    5283 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0915 11:43:10.047415    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:10.124441    5283 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:43:11.326149    5283 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.201707584s)
	I0915 11:43:11.326273    5283 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:43:11.340338    5283 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:43:11.340347    5283 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0915 11:43:11.340352    5283 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0915 11:43:11.344880    5283 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:43:11.346725    5283 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:43:11.349371    5283 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:43:11.349505    5283 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:43:11.351401    5283 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:43:11.351917    5283 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:43:11.353477    5283 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:43:11.353517    5283 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:43:11.354997    5283 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0915 11:43:11.355101    5283 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:43:11.356161    5283 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:43:11.356849    5283 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:43:11.358062    5283 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0915 11:43:11.358107    5283 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:43:11.358522    5283 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:43:11.360009    5283 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:43:11.735352    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:43:11.748386    5283 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0915 11:43:11.748416    5283 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:43:11.748479    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:43:11.758624    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0915 11:43:11.766581    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:43:11.776773    5283 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0915 11:43:11.776793    5283 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:43:11.776853    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:43:11.786673    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0915 11:43:11.795305    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:43:11.797037    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0915 11:43:11.797788    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0915 11:43:11.806844    5283 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0915 11:43:11.806867    5283 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:43:11.806928    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	W0915 11:43:11.817515    5283 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0915 11:43:11.817670    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:43:11.820626    5283 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0915 11:43:11.820641    5283 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0915 11:43:11.820645    5283 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0915 11:43:11.820652    5283 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:43:11.820696    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0915 11:43:11.820697    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0915 11:43:11.821305    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0915 11:43:11.835601    5283 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0915 11:43:11.835628    5283 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:43:11.835691    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:43:11.840464    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0915 11:43:11.848552    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0915 11:43:11.848675    5283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0915 11:43:11.851909    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0915 11:43:11.851927    5283 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0915 11:43:11.851937    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0915 11:43:11.852014    5283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:43:11.854378    5283 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0915 11:43:11.854396    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0915 11:43:11.862711    5283 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0915 11:43:11.862725    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0915 11:43:11.869298    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:43:11.933356    5283 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0915 11:43:11.933376    5283 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:43:11.933382    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0915 11:43:11.933407    5283 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0915 11:43:11.933426    5283 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:43:11.933491    5283 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:43:11.978208    5283 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0915 11:43:11.978228    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0915 11:43:12.241492    5283 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0915 11:43:12.242132    5283 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:43:12.280296    5283 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0915 11:43:12.280338    5283 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:43:12.280489    5283 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:43:13.662611    5283 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.382101958s)
	I0915 11:43:13.662646    5283 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0915 11:43:13.663034    5283 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:43:13.668293    5283 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0915 11:43:13.668336    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0915 11:43:13.724005    5283 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:43:13.724019    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0915 11:43:13.961185    5283 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0915 11:43:13.961228    5283 cache_images.go:92] duration metric: took 2.620901625s to LoadCachedImages
	W0915 11:43:13.961267    5283 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0915 11:43:13.961277    5283 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0915 11:43:13.961346    5283 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-196000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
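The kubelet override above uses the same ExecStart-reset pattern as the docker unit and lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the 380-byte scp below. To see the merged unit once the drop-in is in place (illustrative check):

    sudo systemctl cat kubelet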
	I0915 11:43:13.961428    5283 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 11:43:13.974620    5283 cni.go:84] Creating CNI manager for ""
	I0915 11:43:13.974634    5283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:43:13.974639    5283 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 11:43:13.974647    5283 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-196000 NodeName:running-upgrade-196000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 11:43:13.974712    5283 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-196000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 11:43:13.974776    5283 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0915 11:43:13.977794    5283 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 11:43:13.977821    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 11:43:13.981103    5283 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0915 11:43:13.986174    5283 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 11:43:13.990762    5283 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
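The generated manifest above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged as /var/tmp/minikube/kubeadm.yaml.new by the scp just shown. Schematically, it is later handed to the version-pinned kubeadm binary along the lines of the following (not a command captured in this excerpt):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml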
	I0915 11:43:13.996259    5283 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0915 11:43:13.997650    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:43:14.076802    5283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:43:14.082005    5283 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000 for IP: 10.0.2.15
	I0915 11:43:14.082014    5283 certs.go:194] generating shared ca certs ...
	I0915 11:43:14.082022    5283 certs.go:226] acquiring lock for ca certs: {Name:mkae14c7548e7e09ac75f5a854dc2935289ebc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:43:14.082179    5283 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key
	I0915 11:43:14.082229    5283 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key
	I0915 11:43:14.082235    5283 certs.go:256] generating profile certs ...
	I0915 11:43:14.082313    5283 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.key
	I0915 11:43:14.082329    5283 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key.00b87395
	I0915 11:43:14.082342    5283 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt.00b87395 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0915 11:43:14.123968    5283 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt.00b87395 ...
	I0915 11:43:14.123973    5283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt.00b87395: {Name:mk4de3dca6c863b121ea3a6985fbf3b256c653b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:43:14.125403    5283 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key.00b87395 ...
	I0915 11:43:14.125409    5283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key.00b87395: {Name:mkb7e96fdc1013b365e1ead7dadae8424ef7d5c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:43:14.125577    5283 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt.00b87395 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt
	I0915 11:43:14.125716    5283 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key.00b87395 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key
	I0915 11:43:14.125879    5283 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/proxy-client.key
	I0915 11:43:14.126020    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem (1338 bytes)
	W0915 11:43:14.126048    5283 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0915 11:43:14.126054    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 11:43:14.126081    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem (1078 bytes)
	I0915 11:43:14.126113    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem (1123 bytes)
	I0915 11:43:14.126138    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem (1679 bytes)
	I0915 11:43:14.126193    5283 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:43:14.126556    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 11:43:14.133998    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 11:43:14.141936    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 11:43:14.149716    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 11:43:14.157682    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 11:43:14.164955    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 11:43:14.171582    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 11:43:14.178311    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0915 11:43:14.185714    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 11:43:14.192987    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0915 11:43:14.199534    5283 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0915 11:43:14.206238    5283 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 11:43:14.215416    5283 ssh_runner.go:195] Run: openssl version
	I0915 11:43:14.220869    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 11:43:14.224538    5283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:43:14.226430    5283 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:43:14.226461    5283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:43:14.229202    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 11:43:14.232897    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0915 11:43:14.237113    5283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0915 11:43:14.238654    5283 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:11 /usr/share/ca-certificates/2174.pem
	I0915 11:43:14.238679    5283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0915 11:43:14.240740    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0915 11:43:14.243332    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0915 11:43:14.246541    5283 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0915 11:43:14.248008    5283 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:11 /usr/share/ca-certificates/21742.pem
	I0915 11:43:14.248031    5283 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0915 11:43:14.249695    5283 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
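Each `ln -fs` target name above is the certificate's OpenSSL subject hash with a .0 suffix, which is how OpenSSL locates CA files in /etc/ssl/certs. The hashes come straight from the `openssl x509 -hash` calls in the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink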
	I0915 11:43:14.252723    5283 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 11:43:14.254228    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 11:43:14.256183    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 11:43:14.257873    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 11:43:14.259707    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 11:43:14.261739    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 11:43:14.263631    5283 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
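`-checkend 86400` makes openssl exit non-zero when the certificate expires within 86400 seconds (24 hours), so each Run above doubles as a freshness probe on the existing control-plane certs. Standalone form (illustrative):

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for >24h" || echo "expiring soon"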
	I0915 11:43:14.265472    5283 kubeadm.go:392] StartCluster: {Name:running-upgrade-196000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50310 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-196000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:43:14.265546    5283 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:43:14.275823    5283 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 11:43:14.279039    5283 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 11:43:14.279048    5283 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 11:43:14.279071    5283 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 11:43:14.282624    5283 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:43:14.282865    5283 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-196000" does not appear in /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:43:14.282913    5283 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1650/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-196000" cluster setting kubeconfig missing "running-upgrade-196000" context setting]
	I0915 11:43:14.283050    5283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:43:14.283730    5283 kapi.go:59] client config for running-upgrade-196000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ced800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:43:14.284089    5283 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 11:43:14.286881    5283 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-196000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
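
The reconfiguration decision above rests on a plain "diff -u" between the kubeadm config already on disk and the freshly rendered one: exit status 1 (files differ) marks drift, and the diff itself shows the new config moving the CRI socket to a unix:// URI and the cgroup driver from systemd to cgroupfs. A sketch of the same exit-code interpretation in Go, assuming both paths (taken from the log) exist:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new` and interprets the exit code:
	// 0 means identical, 1 means the files differ (drift), anything else
	// is a real error (e.g. a missing file).
	func configDrifted(oldPath, newPath string) (bool, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, nil // exit 0: no drift
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			fmt.Print(string(out)) // show the unified diff, as the log does
			return true, nil
		}
		return false, err
	}

	func main() {
		drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		fmt.Println("drift detected:", drifted)
	}
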
	I0915 11:43:14.286886    5283 kubeadm.go:1160] stopping kube-system containers ...
	I0915 11:43:14.286933    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:43:14.297834    5283 docker.go:483] Stopping containers: [57f3b18e3835 5b1c52582295 c473839de3b9 385601921d09 a5e082780bcb 641fb718dc87 14c778f2bdc2 e340e83e6dee 3373156fd94c 4749a7775209 9fbf46ad5e75 73201fec7e66 b690e81bc1d2 2d1eabbdc3dd]
	I0915 11:43:14.297915    5283 ssh_runner.go:195] Run: docker stop 57f3b18e3835 5b1c52582295 c473839de3b9 385601921d09 a5e082780bcb 641fb718dc87 14c778f2bdc2 e340e83e6dee 3373156fd94c 4749a7775209 9fbf46ad5e75 73201fec7e66 b690e81bc1d2 2d1eabbdc3dd
	I0915 11:43:14.308628    5283 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 11:43:14.394792    5283 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:43:14.398783    5283 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 15 18:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Sep 15 18:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 15 18:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 15 18:42 /etc/kubernetes/scheduler.conf
	
	I0915 11:43:14.398824    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf
	I0915 11:43:14.402176    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:43:14.402205    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:43:14.405584    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf
	I0915 11:43:14.408701    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:43:14.408725    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:43:14.411752    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf
	I0915 11:43:14.414277    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:43:14.414298    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:43:14.417342    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf
	I0915 11:43:14.420166    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:43:14.420193    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
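
Each grep probe above looks for the expected control-plane endpoint, https://control-plane.minikube.internal:50310, inside one of the four existing kubeconfig files; when grep exits 1 the file is treated as stale and deleted so that the "kubeadm init phase kubeconfig" step below can regenerate it. A hedged Go equivalent of one probe-and-remove step, with the endpoint and path copied from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeIfMissingEndpoint deletes conf unless it already references
	// the expected API-server endpoint, mirroring the grep-then-rm pair
	// in the log above.
	func removeIfMissingEndpoint(conf, endpoint string) error {
		data, err := os.ReadFile(conf)
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // endpoint present: keep the file
		}
		fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
		return os.Remove(conf)
	}

	func main() {
		err := removeIfMissingEndpoint("/etc/kubernetes/admin.conf",
			"https://control-plane.minikube.internal:50310")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
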
	I0915 11:43:14.422648    5283 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:43:14.425787    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:43:14.449291    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:43:15.098531    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:43:15.301828    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:43:15.325832    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:43:15.345721    5283 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:43:15.345808    5283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:43:15.848250    5283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:43:16.347877    5283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:43:16.352444    5283 api_server.go:72] duration metric: took 1.006740666s to wait for apiserver process to appear ...
	I0915 11:43:16.352454    5283 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:43:16.352473    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:21.354481    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:21.354521    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:26.354852    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:26.354958    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:31.356319    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:31.356411    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:36.357768    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:36.357872    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:41.359738    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:41.359842    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:46.362091    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:46.362236    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:51.364973    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:51.365104    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:43:56.367820    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:43:56.367915    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:01.370613    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:01.370707    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:06.371968    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:06.372074    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:11.374839    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:11.374932    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:16.377609    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
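
Every healthz probe in the block above fails the same way: the client's five-second request timeout expires before the apiserver answers, so the probe is retried on a fresh window until the overall wait budget runs out and the run falls back to collecting diagnostics. A minimal sketch of that polling loop, assuming a 5s per-request timeout; TLS verification is skipped here purely for brevity, whereas the real client config shown earlier pins the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the deadline passes.
	// Each request gets its own 5s timeout, matching the cadence in the log.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; the real client
				// verifies against the cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
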
	I0915 11:44:16.378047    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:16.410077    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:16.410245    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:16.427884    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:16.428014    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:16.441828    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:16.441940    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:16.453465    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:16.453554    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:16.464195    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:16.464291    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:16.475046    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:16.475125    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:16.484970    5283 logs.go:276] 0 containers: []
	W0915 11:44:16.484981    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:16.485056    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:16.495176    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:16.495209    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:16.495215    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:16.513882    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:16.513892    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:16.527849    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:16.527858    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:16.565041    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:16.565048    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:16.637664    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:16.637675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:16.657972    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:16.657983    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:16.668851    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:16.668863    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:16.685934    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:16.685944    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:16.697360    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:16.697370    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:16.708579    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:16.708590    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:16.725542    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:16.725553    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:16.751449    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:16.751459    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:16.756000    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:16.756008    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:16.774004    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:16.774016    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:16.785181    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:16.785193    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:16.801133    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:16.801145    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:16.812826    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:16.812836    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
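
From this point the log settles into a fixed failure loop that repeats until the test's deadline: one five-second healthz probe times out, minikube re-enumerates the kube-system containers, and the same diagnostic sources are re-collected in a shuffled order (the kubelet and Docker journals, dmesg, "kubectl describe nodes", container status, and "docker logs --tail 400" for each component). The two container IDs listed per component are the pre- and post-restart instances of that component; only the timestamps and the gathering order change between iterations.
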
	I0915 11:44:19.327716    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:24.330501    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:24.331100    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:24.371242    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:24.371404    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:24.392467    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:24.392614    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:24.407384    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:24.407478    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:24.424652    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:24.424738    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:24.435454    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:24.435556    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:24.445943    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:24.446021    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:24.456168    5283 logs.go:276] 0 containers: []
	W0915 11:44:24.456183    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:24.456254    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:24.466717    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:24.466732    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:24.466737    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:44:24.478847    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:24.478860    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:24.516775    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:24.516783    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:24.527819    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:24.527832    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:24.539944    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:24.539957    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:24.551495    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:24.551508    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:24.562820    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:24.562833    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:24.588903    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:24.588913    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:24.592879    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:24.592885    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:24.628858    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:24.628868    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:24.644252    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:24.644263    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:24.659060    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:24.659131    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:24.681899    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:24.681914    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:24.692851    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:24.692864    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:24.712351    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:24.712363    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:24.729706    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:24.729715    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:24.741244    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:24.741256    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:27.254879    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:32.257247    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:32.257806    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:32.290480    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:32.290652    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:32.311357    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:32.311468    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:32.325712    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:32.325803    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:32.337735    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:32.337823    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:32.348278    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:32.348354    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:32.358834    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:32.358915    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:32.368969    5283 logs.go:276] 0 containers: []
	W0915 11:44:32.368983    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:32.369059    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:32.379187    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:32.379206    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:32.379211    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:32.391071    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:32.391082    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:32.402987    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:32.402998    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:32.420300    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:32.420309    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:32.431503    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:32.431511    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:32.442548    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:32.442562    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:32.478472    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:32.478485    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:32.500645    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:32.500655    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:32.523252    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:32.523264    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:32.527830    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:32.527840    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:32.541518    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:32.541530    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:32.553110    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:32.553120    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:32.587998    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:32.588005    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:32.604733    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:32.604742    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:32.629109    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:32.629115    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:32.642274    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:32.642285    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:32.653338    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:32.653365    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:44:35.168441    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:40.171291    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:40.171903    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:40.212675    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:40.212845    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:40.233072    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:40.233204    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:40.247709    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:40.247803    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:40.260053    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:40.260138    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:40.270335    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:40.270419    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:40.280576    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:40.280659    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:40.290405    5283 logs.go:276] 0 containers: []
	W0915 11:44:40.290416    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:40.290486    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:40.300924    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:40.300943    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:40.300948    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:40.337890    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:40.337899    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:40.342651    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:40.342658    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:40.367390    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:40.367405    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:40.378648    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:40.378659    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:40.390354    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:40.390363    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:40.407382    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:40.407394    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:40.450841    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:40.450856    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:40.462118    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:40.462128    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:44:40.474190    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:40.474204    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:40.485807    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:40.485826    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:40.497855    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:40.497873    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:40.519380    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:40.519390    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:40.532745    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:40.532756    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:40.550529    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:40.550543    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:40.562223    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:40.562233    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:40.573516    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:40.573527    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:43.100119    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:48.102438    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:48.102728    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:48.122734    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:48.122854    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:48.137460    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:48.137550    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:48.149731    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:48.149813    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:48.160717    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:48.160800    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:48.172633    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:48.172717    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:48.192959    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:48.193034    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:48.203293    5283 logs.go:276] 0 containers: []
	W0915 11:44:48.203309    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:48.203378    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:48.213242    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:48.213258    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:48.213264    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:48.238598    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:48.238606    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:48.275993    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:48.276002    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:48.289464    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:48.289476    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:48.306818    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:48.306831    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:48.319020    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:48.319035    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:48.334183    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:48.334194    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:48.345627    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:48.345640    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:48.357112    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:48.357126    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:48.368671    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:48.368684    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:48.390060    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:48.390070    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:48.407167    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:48.407177    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:48.418413    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:48.418425    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:44:48.430615    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:48.430628    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:48.435387    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:48.435397    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:48.469673    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:48.469686    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:48.489377    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:48.489388    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:51.003548    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:44:56.004951    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:44:56.005598    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:44:56.045080    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:44:56.045243    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:44:56.066436    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:44:56.066546    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:44:56.081598    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:44:56.081673    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:44:56.094042    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:44:56.094127    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:44:56.106011    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:44:56.106086    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:44:56.116828    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:44:56.116900    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:44:56.126709    5283 logs.go:276] 0 containers: []
	W0915 11:44:56.126718    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:44:56.126778    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:44:56.140804    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:44:56.140820    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:44:56.140825    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:44:56.152259    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:44:56.152272    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:44:56.163258    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:44:56.163269    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:44:56.187709    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:44:56.187716    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:44:56.191860    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:44:56.191873    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:44:56.215965    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:44:56.215977    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:44:56.229904    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:44:56.229914    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:44:56.241642    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:44:56.241657    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:44:56.276763    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:44:56.276773    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:44:56.300747    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:44:56.300774    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:44:56.323923    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:44:56.323933    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:44:56.335511    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:44:56.335522    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:44:56.346724    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:44:56.346733    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:44:56.359579    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:44:56.359592    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:44:56.395643    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:44:56.395657    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:44:56.413527    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:44:56.413538    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:44:56.424082    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:44:56.424095    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:44:58.937731    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:03.940572    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:03.941082    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:03.985460    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:03.985626    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:04.006159    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:04.006295    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:04.022581    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:04.022677    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:04.035344    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:04.035434    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:04.046140    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:04.046217    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:04.056767    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:04.056850    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:04.066911    5283 logs.go:276] 0 containers: []
	W0915 11:45:04.066926    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:04.066990    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:04.077535    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:04.077552    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:04.077559    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:04.082638    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:04.082648    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:04.100698    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:04.100708    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:04.126326    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:04.126335    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:04.163363    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:04.163379    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:04.177653    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:04.177663    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:04.192199    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:04.192212    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:04.202669    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:04.202680    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:04.214345    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:04.214356    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:04.225810    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:04.225820    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:04.246807    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:04.246817    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:04.258803    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:04.258814    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:04.270495    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:04.270507    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:04.281660    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:04.281670    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:04.316903    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:04.316915    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:04.337008    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:04.337019    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:04.348978    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:04.348990    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:06.861673    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:11.863067    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:11.863542    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:11.900223    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:11.900387    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:11.921350    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:11.921482    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:11.936108    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:11.936195    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:11.948464    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:11.948539    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:11.959773    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:11.959883    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:11.973347    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:11.973438    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:11.989788    5283 logs.go:276] 0 containers: []
	W0915 11:45:11.989802    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:11.989876    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:12.001100    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:12.001119    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:12.001125    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:12.012338    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:12.012350    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:12.037333    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:12.037340    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:12.074489    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:12.074503    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:12.079387    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:12.079394    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:12.092929    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:12.092943    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:12.111403    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:12.111416    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:12.123131    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:12.123144    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:12.134810    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:12.134821    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:12.151017    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:12.151028    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:12.171827    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:12.171840    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:12.183937    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:12.183951    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:12.196600    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:12.196613    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:12.208296    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:12.208306    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:12.243327    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:12.243338    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:12.257352    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:12.257363    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:12.274442    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:12.274455    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:14.788608    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:19.790936    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:19.791499    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:19.831239    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:19.831407    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:19.852474    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:19.852591    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:19.872992    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:19.873085    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:19.884159    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:19.884241    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:19.895667    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:19.895742    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:19.906574    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:19.906649    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:19.916302    5283 logs.go:276] 0 containers: []
	W0915 11:45:19.916317    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:19.916391    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:19.926832    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:19.926850    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:19.926855    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:19.939380    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:19.939392    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:19.957095    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:19.957109    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:19.970076    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:19.970088    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:19.983116    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:19.983132    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:19.998636    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:19.998652    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:20.020418    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:20.020433    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:20.044351    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:20.044370    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:20.058570    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:20.058587    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:20.071753    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:20.071767    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:20.099181    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:20.099207    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:20.109413    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:20.109426    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:20.137559    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:20.137571    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:20.149516    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:20.149531    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:20.161778    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:20.161788    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:20.198703    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:20.198717    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:20.234628    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:20.234639    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:22.751134    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:27.753794    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:27.754373    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:27.793263    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:27.793411    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:27.817287    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:27.817422    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:27.832392    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:27.832494    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:27.845752    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:27.845839    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:27.860176    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:27.860255    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:27.875178    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:27.875254    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:27.890637    5283 logs.go:276] 0 containers: []
	W0915 11:45:27.890652    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:27.890723    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:27.902152    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:27.902173    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:27.902178    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:27.914259    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:27.914270    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:27.925658    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:27.925670    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:27.963449    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:27.963465    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:27.968062    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:27.968068    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:27.979513    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:27.979524    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:27.996539    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:27.996554    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:28.031106    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:28.031122    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:28.051886    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:28.051914    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:28.075898    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:28.075907    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:28.091899    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:28.091914    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:28.103760    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:28.103774    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:28.115744    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:28.115756    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:28.126715    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:28.126725    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:28.140456    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:28.140468    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:28.158144    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:28.158156    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:28.188434    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:28.188443    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:30.708250    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:35.709410    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:35.709943    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:35.750973    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:35.751137    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:35.783458    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:35.783565    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:35.797932    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:35.798016    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:35.810685    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:35.810784    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:35.821895    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:35.821982    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:35.832915    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:35.833000    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:35.843268    5283 logs.go:276] 0 containers: []
	W0915 11:45:35.843281    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:35.843352    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:35.853956    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:35.853978    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:35.853984    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:35.866541    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:35.866552    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:35.880779    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:35.880792    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:35.892505    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:35.892517    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:35.927795    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:35.927806    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:35.942477    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:35.942488    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:35.979163    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:35.979182    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:35.985504    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:35.985516    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:36.004633    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:36.004648    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:36.026063    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:36.026080    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:36.046933    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:36.046947    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:36.060467    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:36.060479    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:36.073649    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:36.073664    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:36.086563    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:36.086575    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:36.099491    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:36.099502    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:36.124239    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:36.124255    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:36.143246    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:36.143259    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:38.657293    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:43.659474    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:43.659608    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:43.672216    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:43.672310    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:43.685732    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:43.685832    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:43.697839    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:43.697929    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:43.710343    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:43.710424    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:43.722795    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:43.722878    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:43.735354    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:43.735432    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:43.747891    5283 logs.go:276] 0 containers: []
	W0915 11:45:43.747909    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:43.747996    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:43.766220    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:43.766245    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:43.766252    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:43.785518    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:43.785532    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:43.799672    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:43.799686    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:43.827467    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:43.827481    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:43.866560    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:43.866571    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:43.902910    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:43.902922    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:43.921722    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:43.921733    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:43.933752    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:43.933763    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:43.946284    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:43.946296    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:43.957786    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:43.957798    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:43.962745    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:43.962752    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:43.982626    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:43.982635    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:43.994152    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:43.994162    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:44.006691    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:44.006705    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:44.019671    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:44.019685    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:44.035666    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:44.035681    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:44.049236    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:44.049249    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:46.570023    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:51.572464    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:51.573003    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:51.615080    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:51.615265    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:51.635906    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:51.636023    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:51.650876    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:51.650964    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:51.663014    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:51.663094    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:51.676978    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:51.677059    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:51.687136    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:51.687222    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:51.697357    5283 logs.go:276] 0 containers: []
	W0915 11:45:51.697369    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:51.697440    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:51.708249    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:51.708265    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:51.708270    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:51.745347    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:51.745356    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:51.759549    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:51.759564    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:51.779577    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:51.779588    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:51.796926    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:51.796938    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:51.807928    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:51.807940    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:51.819413    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:51.819427    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:51.853946    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:51.853960    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:51.867643    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:51.867656    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:51.885224    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:51.885240    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:51.896388    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:51.896398    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:51.911167    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:51.911182    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:51.915614    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:51.915619    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:51.927326    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:51.927338    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:45:51.938679    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:51.938692    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:51.949638    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:51.949648    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:51.961226    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:51.961237    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:54.486714    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:45:59.488860    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:45:59.488995    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:45:59.500464    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:45:59.500565    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:45:59.512380    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:45:59.512464    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:45:59.525362    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:45:59.525458    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:45:59.535773    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:45:59.535855    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:45:59.549939    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:45:59.550018    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:45:59.561137    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:45:59.561222    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:45:59.571980    5283 logs.go:276] 0 containers: []
	W0915 11:45:59.571991    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:45:59.572059    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:45:59.583213    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:45:59.583233    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:45:59.583238    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:45:59.597441    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:45:59.597452    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:45:59.609104    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:45:59.609115    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:45:59.622839    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:45:59.622850    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:45:59.640411    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:45:59.640424    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:45:59.653848    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:45:59.653861    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:45:59.667838    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:45:59.667851    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:45:59.681399    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:45:59.681412    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:45:59.724830    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:45:59.724844    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:45:59.742725    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:45:59.742736    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:45:59.754748    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:45:59.754759    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:45:59.779022    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:45:59.779031    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:45:59.815678    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:45:59.815694    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:45:59.820567    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:45:59.820578    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:45:59.841351    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:45:59.841368    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:45:59.855005    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:45:59.855018    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:45:59.868962    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:45:59.868974    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:02.383706    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:07.384452    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:07.384590    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:07.395829    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:07.395926    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:07.409590    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:07.409681    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:07.421436    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:07.421521    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:07.432657    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:07.432757    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:07.444933    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:07.445026    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:07.461446    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:07.461528    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:07.476278    5283 logs.go:276] 0 containers: []
	W0915 11:46:07.476291    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:07.476371    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:07.487601    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:07.487621    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:07.487627    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:07.505955    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:07.505973    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:07.521358    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:07.521373    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:07.538864    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:07.538879    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:07.551592    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:07.551603    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:07.576738    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:07.576752    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:07.589606    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:07.589622    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:07.594407    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:07.594422    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:07.616302    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:07.616319    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:07.628213    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:07.628226    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:07.642559    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:07.642573    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:07.655657    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:07.655672    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:07.668499    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:07.668512    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:07.707721    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:07.707737    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:07.742616    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:07.742633    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:07.756500    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:07.756511    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:07.773735    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:07.773745    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:10.287708    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:15.289875    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:15.290045    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:15.311189    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:15.311299    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:15.324760    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:15.324849    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:15.335199    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:15.335284    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:15.345849    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:15.345924    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:15.357704    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:15.357788    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:15.368270    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:15.368360    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:15.378313    5283 logs.go:276] 0 containers: []
	W0915 11:46:15.378329    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:15.378393    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:15.388955    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:15.388973    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:15.388979    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:15.425077    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:15.425091    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:15.442731    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:15.442742    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:15.458420    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:15.458435    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:15.470973    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:15.470985    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:15.495309    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:15.495317    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:15.529188    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:15.529200    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:15.541040    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:15.541049    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:15.559173    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:15.559186    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:15.570590    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:15.570608    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:15.582310    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:15.582326    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:15.596262    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:15.596273    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:15.610686    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:15.610697    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:15.624695    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:15.624705    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:15.636157    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:15.636169    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:15.647814    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:15.647825    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:15.651943    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:15.651948    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:18.174118    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:23.176735    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:23.176925    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:23.191715    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:23.191798    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:23.205191    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:23.205274    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:23.216023    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:23.216104    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:23.226329    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:23.226412    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:23.237644    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:23.237723    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:23.249036    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:23.249124    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:23.260814    5283 logs.go:276] 0 containers: []
	W0915 11:46:23.260826    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:23.260910    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:23.271396    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:23.271416    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:23.271422    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:23.292565    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:23.292575    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:23.297037    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:23.297044    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:23.310859    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:23.310872    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:23.328625    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:23.328635    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:23.341974    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:23.341986    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:23.354164    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:23.354178    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:23.365716    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:23.365728    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:23.392082    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:23.392092    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:23.403684    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:23.403696    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:23.415983    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:23.415996    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:23.427905    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:23.427916    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:23.445699    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:23.445710    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:23.457746    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:23.457758    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:23.494752    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:23.494763    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:23.529962    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:23.529976    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:23.554565    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:23.554573    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:26.068640    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:31.071190    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:31.071321    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:31.082932    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:31.083022    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:31.094272    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:31.094369    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:31.105436    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:31.105511    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:31.116196    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:31.116280    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:31.127420    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:31.127503    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:31.138331    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:31.138407    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:31.151702    5283 logs.go:276] 0 containers: []
	W0915 11:46:31.151715    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:31.151773    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:31.166635    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:31.166652    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:31.166658    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:31.178752    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:31.178764    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:31.191147    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:31.191161    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:31.215838    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:31.215845    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:31.235124    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:31.235133    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:31.239726    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:31.239733    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:31.254536    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:31.254550    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:31.267416    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:31.267427    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:31.278913    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:31.278926    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:31.316666    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:31.316675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:31.330794    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:31.330804    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:31.351119    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:31.351130    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:31.363936    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:31.363951    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:31.387332    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:31.387342    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:31.399602    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:31.399613    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:31.438407    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:31.438422    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:31.456848    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:31.456859    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:33.970918    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:38.973273    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:38.973831    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:39.012843    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:39.013018    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:39.034915    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:39.035014    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:39.057775    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:39.057869    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:39.068795    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:39.068868    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:39.079235    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:39.079322    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:39.090343    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:39.090428    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:39.105047    5283 logs.go:276] 0 containers: []
	W0915 11:46:39.105062    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:39.105134    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:39.115227    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
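Each failed probe is followed by the container inventory above: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, the "k8s_" prefix being the name Docker gives kubelet-managed containers. A hedged Go sketch of that enumeration (hypothetical helper, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the kubelet-style k8s_<component> prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // e.g. "2 containers: [...]"
	}
}

Two IDs per component here reflect a restart: the old exited container plus its replacement, both of which get their logs collected below.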
	I0915 11:46:39.115247    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:39.115253    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:39.119646    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:39.119655    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:39.159227    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:39.159240    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:39.171199    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:39.171211    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:39.185819    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:39.185832    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:39.198527    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:39.198539    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:39.212929    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:39.212945    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:39.226663    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:39.226675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:39.238431    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:39.238444    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:39.276248    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:39.276261    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:39.297767    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:39.297777    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:39.311170    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:39.311182    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:39.322284    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:39.322294    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:39.333732    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:39.333745    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:39.351018    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:39.351031    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:39.362686    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:39.362699    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:39.381088    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:39.381101    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
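With the container IDs in hand, each cycle gathers one source at a time: "docker logs --tail 400 <id>" per container, journalctl for the kubelet and Docker/cri-docker units, a filtered dmesg, "kubectl describe nodes", and a crictl-with-docker-fallback for container status. The command strings in this sketch are copied from the Run lines above; running them locally instead of over SSH is the only liberty taken:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash, mirroring the
// ssh_runner.go Run lines in the log.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("kube-apiserver [6bc3b7ef5b7e]", "docker logs --tail 400 6bc3b7ef5b7e")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}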
	I0915 11:46:41.905772    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:46.908326    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:46.908430    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:46.921144    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:46.921232    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:46.931841    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:46.931925    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:46.943663    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:46.943751    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:46.954650    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:46.954736    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:46.965482    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:46.965567    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:46.976271    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:46.976354    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:46.987061    5283 logs.go:276] 0 containers: []
	W0915 11:46:46.987075    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:46.987150    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:46.997967    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:46.997984    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:46.997990    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:47.015999    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:47.016010    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:47.028314    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:47.028329    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:47.040769    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:47.040781    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:47.056217    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:47.056233    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:47.068114    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:47.068126    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:47.107113    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:47.107133    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:47.112181    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:47.112193    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:47.148660    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:47.148677    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:47.169586    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:47.169600    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:47.184885    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:47.184902    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:47.201546    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:47.201558    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:47.222044    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:47.222057    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:47.246387    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:47.246404    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:47.268921    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:47.268930    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:47.281332    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:47.281346    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:47.293624    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:47.293640    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:49.808701    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:54.810468    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:54.810581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:54.822215    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:54.822299    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:54.832700    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:54.832793    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:54.842988    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:54.843071    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:54.853861    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:54.853944    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:54.863973    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:54.864042    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:54.874995    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:54.875074    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:54.885495    5283 logs.go:276] 0 containers: []
	W0915 11:46:54.885509    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:54.885571    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:54.900175    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:54.900193    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:54.900199    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:54.920207    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:54.920217    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:54.932362    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:54.932374    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:54.944060    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:54.944070    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:54.958458    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:54.958468    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:54.969660    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:54.969671    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:54.981385    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:54.981402    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:55.016813    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:55.016824    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:55.020978    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:55.020987    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:55.058064    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:55.058077    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:55.072147    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:55.072160    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:55.086766    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:55.086779    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:55.114879    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:55.114892    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:55.132074    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:55.132084    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:55.156091    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:55.156102    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:55.173627    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:55.173640    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:55.187315    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:55.187324    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:57.701322    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:02.703618    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:02.703893    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:02.729625    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:47:02.729767    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:02.745765    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:47:02.745874    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:02.758848    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:47:02.758935    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:02.770500    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:47:02.770581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:02.780617    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:47:02.780702    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:02.791041    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:47:02.791122    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:02.800804    5283 logs.go:276] 0 containers: []
	W0915 11:47:02.800816    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:02.800890    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:02.811786    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:47:02.811805    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:02.811813    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:02.835413    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:47:02.835420    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:47:02.852452    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:47:02.852462    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:47:02.865012    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:02.865024    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:02.901661    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:02.901672    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:02.937532    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:47:02.937544    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:47:02.952052    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:47:02.952063    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:47:02.973200    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:47:02.973215    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:47:02.983989    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:47:02.984002    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:47:02.998104    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:47:02.998114    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:47:03.015711    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:47:03.015724    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:47:03.027729    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:47:03.027741    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:47:03.039089    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:47:03.039101    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:47:03.051825    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:03.051836    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:03.056562    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:47:03.056572    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:47:03.068410    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:47:03.068421    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:47:03.080367    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:47:03.080381    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:47:05.592779    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:10.595098    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:10.595215    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:10.606903    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:47:10.606990    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:10.617798    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:47:10.617884    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:10.628800    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:47:10.628882    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:10.639822    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:47:10.639912    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:10.650538    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:47:10.650624    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:10.661652    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:47:10.661735    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:10.672099    5283 logs.go:276] 0 containers: []
	W0915 11:47:10.672111    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:10.672186    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:10.683269    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:47:10.683290    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:10.683297    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:10.719399    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:47:10.719410    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:47:10.759134    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:47:10.759145    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:47:10.771357    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:47:10.771368    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:47:10.783039    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:10.783049    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:10.821204    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:10.821217    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:10.825531    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:47:10.825538    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:47:10.843390    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:47:10.843401    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:47:10.861201    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:47:10.861212    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:47:10.873267    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:47:10.873279    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:47:10.887374    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:47:10.887386    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:47:10.901236    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:47:10.901247    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:47:10.913026    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:10.913038    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:10.934734    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:47:10.934744    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:47:10.946201    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:47:10.946215    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:47:10.958310    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:47:10.958322    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:47:10.969827    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:47:10.969841    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:47:13.483943    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:18.486208    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:18.486289    5283 kubeadm.go:597] duration metric: took 4m4.210233792s to restartPrimaryControlPlane
	W0915 11:47:18.486357    5283 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 11:47:18.486387    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0915 11:47:19.457201    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 11:47:19.462343    5283 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:47:19.465104    5283 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:47:19.467878    5283 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:47:19.467885    5283 kubeadm.go:157] found existing configuration files:
	
	I0915 11:47:19.467914    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf
	I0915 11:47:19.470904    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:47:19.470935    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:47:19.473879    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf
	I0915 11:47:19.476379    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:47:19.476408    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:47:19.479485    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf
	I0915 11:47:19.482587    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:47:19.482611    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:47:19.485139    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf
	I0915 11:47:19.487816    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:47:19.487840    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
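The cleanup above follows a simple rule: keep an /etc/kubernetes/*.conf file only if it already references the expected control-plane endpoint, otherwise remove it so kubeadm init can regenerate it. grep's non-zero exit (status 2 here, since the files do not exist at all after the reset) is what triggers each removal. An illustrative Go version of the same check-then-remove loop (a sketch, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50310"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits 1 when the endpoint is absent and 2 when the file is
		// missing entirely - the "Process exited with status 2" cases above.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				fmt.Fprintln(os.Stderr, "rm failed:", err)
			}
		}
	}
}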
	I0915 11:47:19.491168    5283 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 11:47:19.507949    5283 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0915 11:47:19.507977    5283 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 11:47:19.558293    5283 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 11:47:19.558349    5283 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 11:47:19.558448    5283 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 11:47:19.610438    5283 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 11:47:19.613586    5283 out.go:235]   - Generating certificates and keys ...
	I0915 11:47:19.613646    5283 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 11:47:19.613680    5283 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 11:47:19.613730    5283 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 11:47:19.613760    5283 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 11:47:19.613795    5283 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 11:47:19.613831    5283 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 11:47:19.613866    5283 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 11:47:19.613897    5283 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 11:47:19.613934    5283 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 11:47:19.613979    5283 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 11:47:19.614010    5283 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 11:47:19.614042    5283 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 11:47:19.652403    5283 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 11:47:19.839426    5283 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 11:47:19.984038    5283 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 11:47:20.021112    5283 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 11:47:20.049757    5283 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 11:47:20.050149    5283 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 11:47:20.050176    5283 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 11:47:20.144301    5283 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 11:47:20.148466    5283 out.go:235]   - Booting up control plane ...
	I0915 11:47:20.148521    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 11:47:20.148558    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 11:47:20.148613    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 11:47:20.148667    5283 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 11:47:20.148763    5283 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 11:47:25.151381    5283 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002954 seconds
	I0915 11:47:25.151475    5283 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 11:47:25.156791    5283 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 11:47:25.666417    5283 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 11:47:25.666547    5283 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-196000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 11:47:26.170081    5283 kubeadm.go:310] [bootstrap-token] Using token: sxdjwk.a6hmvxcjy86judm9
	I0915 11:47:26.175792    5283 out.go:235]   - Configuring RBAC rules ...
	I0915 11:47:26.175845    5283 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 11:47:26.175884    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 11:47:26.180336    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 11:47:26.181181    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 11:47:26.182002    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 11:47:26.182748    5283 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 11:47:26.186881    5283 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 11:47:26.347235    5283 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 11:47:26.574090    5283 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 11:47:26.574521    5283 kubeadm.go:310] 
	I0915 11:47:26.574553    5283 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 11:47:26.574556    5283 kubeadm.go:310] 
	I0915 11:47:26.574592    5283 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 11:47:26.574601    5283 kubeadm.go:310] 
	I0915 11:47:26.574617    5283 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 11:47:26.574653    5283 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 11:47:26.574682    5283 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 11:47:26.574685    5283 kubeadm.go:310] 
	I0915 11:47:26.574711    5283 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 11:47:26.574714    5283 kubeadm.go:310] 
	I0915 11:47:26.574736    5283 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 11:47:26.574742    5283 kubeadm.go:310] 
	I0915 11:47:26.574771    5283 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 11:47:26.574807    5283 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 11:47:26.574845    5283 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 11:47:26.574849    5283 kubeadm.go:310] 
	I0915 11:47:26.574901    5283 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 11:47:26.574948    5283 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 11:47:26.574952    5283 kubeadm.go:310] 
	I0915 11:47:26.575011    5283 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sxdjwk.a6hmvxcjy86judm9 \
	I0915 11:47:26.575103    5283 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd \
	I0915 11:47:26.575116    5283 kubeadm.go:310] 	--control-plane 
	I0915 11:47:26.575118    5283 kubeadm.go:310] 
	I0915 11:47:26.575154    5283 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 11:47:26.575162    5283 kubeadm.go:310] 
	I0915 11:47:26.575202    5283 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sxdjwk.a6hmvxcjy86judm9 \
	I0915 11:47:26.575269    5283 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd 
	I0915 11:47:26.575329    5283 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 11:47:26.575338    5283 cni.go:84] Creating CNI manager for ""
	I0915 11:47:26.575346    5283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:47:26.578250    5283 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 11:47:26.584219    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 11:47:26.587664    5283 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
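The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist over scp. A minimal sketch of that write, assuming a generic bridge-plus-portmap conflist; the JSON below is a plausible stand-in, not the exact file minikube ships, and writing under /etc requires root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Assumed bridge CNI config for illustration only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "addIf": "true",
     "isDefaultGateway": true, "ipMasq": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // "sudo mkdir -p" equivalent
		panic(err)
	}
	path := filepath.Join(dir, "1-k8s.conflist")
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}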
	I0915 11:47:26.592622    5283 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 11:47:26.592676    5283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 11:47:26.592694    5283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-196000 minikube.k8s.io/updated_at=2024_09_15T11_47_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=running-upgrade-196000 minikube.k8s.io/primary=true
	I0915 11:47:26.633105    5283 ops.go:34] apiserver oom_adj: -16
	I0915 11:47:26.633103    5283 kubeadm.go:1113] duration metric: took 40.475291ms to wait for elevateKubeSystemPrivileges
	I0915 11:47:26.633202    5283 kubeadm.go:394] duration metric: took 4m12.370834958s to StartCluster
	I0915 11:47:26.633216    5283 settings.go:142] acquiring lock: {Name:mke41fab1fd2ef0229fde23400affd11462eeb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:47:26.633312    5283 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:47:26.633685    5283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:47:26.633907    5283 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:47:26.633919    5283 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 11:47:26.633970    5283 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-196000"
	I0915 11:47:26.633977    5283 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-196000"
	I0915 11:47:26.633995    5283 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-196000"
	I0915 11:47:26.634004    5283 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-196000"
	W0915 11:47:26.634008    5283 addons.go:243] addon storage-provisioner should already be in state true
	I0915 11:47:26.634018    5283 host.go:66] Checking if "running-upgrade-196000" exists ...
	I0915 11:47:26.634062    5283 config.go:182] Loaded profile config "running-upgrade-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:47:26.634884    5283 kapi.go:59] client config for running-upgrade-196000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ced800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:47:26.634998    5283 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-196000"
	W0915 11:47:26.635003    5283 addons.go:243] addon default-storageclass should already be in state true
	I0915 11:47:26.635018    5283 host.go:66] Checking if "running-upgrade-196000" exists ...
	I0915 11:47:26.638214    5283 out.go:177] * Verifying Kubernetes components...
	I0915 11:47:26.638509    5283 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 11:47:26.642309    5283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 11:47:26.642316    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:47:26.646156    5283 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:47:26.650229    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:47:26.654153    5283 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:47:26.654159    5283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 11:47:26.654164    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:47:26.737784    5283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:47:26.742592    5283 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:47:26.742640    5283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:47:26.746437    5283 api_server.go:72] duration metric: took 112.521167ms to wait for apiserver process to appear ...
	I0915 11:47:26.746444    5283 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:47:26.746450    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:26.750966    5283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 11:47:26.776208    5283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
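Each addon manifest is staged under /etc/kubernetes/addons (scp'd a few lines earlier) and applied with the cluster's pinned kubectl binary and kubeconfig, as the two apply commands above show. A sketch under those assumptions, with paths copied from the Run lines and error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies one staged manifest. sudo treats the leading
// VAR=value argument as an environment assignment for the command.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}

Note that both applies go through the apiserver at 10.0.2.15:8443; with that endpoint timing out, only the storage-provisioner callback succeeds below, while default-storageclass fails on its StorageClass list call.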
	I0915 11:47:27.109210    5283 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 11:47:27.109222    5283 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 11:47:31.748522    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:31.748562    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:36.748860    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:36.748907    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:41.749608    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:41.749632    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:46.750112    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:46.750170    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:51.750946    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:51.750984    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:56.751872    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:56.751896    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0915 11:47:57.111226    5283 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0915 11:47:57.114922    5283 out.go:177] * Enabled addons: storage-provisioner
	I0915 11:47:57.122927    5283 addons.go:510] duration metric: took 30.489387541s for enable addons: enabled=[storage-provisioner]
	I0915 11:48:01.753269    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:01.753300    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:06.754817    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:06.754858    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:11.756796    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:11.756822    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:16.758920    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:16.758941    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:21.761081    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:21.761139    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:26.762155    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:26.762315    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:26.772718    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:26.772803    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:26.783912    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:26.783985    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:26.794985    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:26.795070    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:26.806144    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:26.806229    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:26.822865    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:26.822940    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:26.833592    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:26.833671    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:26.844089    5283 logs.go:276] 0 containers: []
	W0915 11:48:26.844101    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:26.844179    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:26.854532    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:26.854547    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:26.854554    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:26.871772    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:26.871782    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:26.882922    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:26.882932    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:26.917808    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:26.917818    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:26.922258    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:26.922263    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:26.956910    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:26.956925    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:26.971364    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:26.971376    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:26.986285    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:26.986296    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:26.997976    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:26.997988    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:27.009282    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:27.009295    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:27.023742    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:27.023755    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:27.034985    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:27.034997    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:27.046910    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:27.046922    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:29.572176    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:34.574467    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:34.574618    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:34.586789    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:34.586876    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:34.598187    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:34.598270    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:34.612182    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:34.612267    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:34.623077    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:34.623159    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:34.633781    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:34.633855    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:34.644422    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:34.644503    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:34.654384    5283 logs.go:276] 0 containers: []
	W0915 11:48:34.654397    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:34.654468    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:34.664711    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:34.664730    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:34.664735    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:34.678179    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:34.678190    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:34.711706    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:34.711717    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:34.725835    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:34.725851    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:34.741859    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:34.741874    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:34.753435    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:34.753444    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:34.767955    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:34.767965    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:34.779443    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:34.779452    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:34.791196    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:34.791211    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:34.795532    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:34.795538    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:34.833926    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:34.833937    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:34.847845    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:34.847857    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:34.865602    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:34.865613    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:37.390991    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:42.393373    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:42.393534    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:42.404907    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:42.404995    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:42.415195    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:42.415273    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:42.426062    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:42.426147    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:42.437709    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:42.437795    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:42.448202    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:42.448287    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:42.458268    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:42.458346    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:42.468360    5283 logs.go:276] 0 containers: []
	W0915 11:48:42.468372    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:42.468447    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:42.479058    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:42.479076    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:42.479083    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:42.490521    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:42.490533    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:42.495024    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:42.495033    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:42.534010    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:42.534021    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:42.549131    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:42.549142    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:42.561298    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:42.561309    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:42.579265    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:42.579277    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:42.604946    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:42.604959    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:42.616713    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:42.616730    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:42.652457    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:42.652466    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:42.674378    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:42.674390    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:42.686093    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:42.686106    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:42.702109    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:42.702120    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:45.219764    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:50.222015    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:50.222116    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:50.235482    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:50.235561    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:50.246517    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:50.246606    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:50.257612    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:50.257700    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:50.268352    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:50.268430    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:50.278583    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:50.278674    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:50.288952    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:50.289023    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:50.299283    5283 logs.go:276] 0 containers: []
	W0915 11:48:50.299297    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:50.299370    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:50.310089    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:50.310106    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:50.310111    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:50.321708    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:50.321719    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:50.346387    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:50.346396    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:50.380609    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:50.380618    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:50.385235    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:50.385244    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:50.398848    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:50.398857    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:50.411029    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:50.411039    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:50.422991    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:50.423002    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:50.442283    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:50.442293    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:50.459942    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:50.459952    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:50.471448    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:50.471457    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:50.514120    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:50.514132    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:50.529108    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:50.529122    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
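
[Annotation] Each cycle re-discovers containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is why every pass prints one logs.go:276] "N containers:" line per component, including the repeated No container was found matching "kindnet" warning (kindnet is simply not deployed here). A minimal Go sketch of that discovery step, assuming a docker CLI on PATH — the containerIDs helper and the hard-coded component list are illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of containers whose name matches the
    // kubeadm naming convention k8s_<component>, including stopped
    // ones (-a), exactly as in the log lines above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Mirrors the logs.go:276 output format above.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
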
	I0915 11:48:53.050037    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:58.052307    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:58.052423    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:58.064453    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:58.064554    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:58.075971    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:58.076052    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:58.087463    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:58.087542    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:58.098375    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:58.098458    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:58.110580    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:58.110669    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:58.122082    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:58.122166    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:58.133196    5283 logs.go:276] 0 containers: []
	W0915 11:48:58.133209    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:58.133283    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:58.145416    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:58.145434    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:58.145440    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:58.158323    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:58.158335    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:58.178464    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:58.178476    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:58.190855    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:58.190866    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:58.226342    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:58.226353    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:58.240536    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:58.240545    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:58.251844    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:58.251859    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:58.269041    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:58.269053    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:58.282483    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:58.282498    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:58.306107    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:58.306120    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:58.318062    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:58.318074    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:58.350723    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:58.350733    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:58.355002    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:58.355010    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:00.869237    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:05.871429    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:05.871523    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:05.882154    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:05.882233    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:05.896622    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:05.896709    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:05.909487    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:05.909571    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:05.920998    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:05.921126    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:05.939544    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:05.939627    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:05.954293    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:05.954376    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:05.967238    5283 logs.go:276] 0 containers: []
	W0915 11:49:05.967251    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:05.967330    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:05.984723    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:05.984738    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:05.984744    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:05.997304    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:05.997316    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:06.013386    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:06.013397    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:06.026716    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:06.026730    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:06.045647    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:06.045658    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:06.059714    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:06.059726    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:06.074869    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:06.074880    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:06.089149    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:06.089162    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:06.135567    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:06.135583    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:06.147493    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:06.147506    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:06.159131    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:06.159143    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:06.184252    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:06.184262    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:06.219291    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:06.219300    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:08.725674    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:13.727848    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:13.727959    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:13.739385    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:13.739471    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:13.750581    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:13.750662    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:13.761712    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:13.761804    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:13.773312    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:13.773401    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:13.785399    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:13.785484    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:13.797151    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:13.797235    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:13.808303    5283 logs.go:276] 0 containers: []
	W0915 11:49:13.808318    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:13.808398    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:13.820065    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:13.820082    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:13.820088    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:13.832499    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:13.832514    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:13.845220    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:13.845231    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:13.864969    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:13.864982    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:13.877317    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:13.877328    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:13.903007    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:13.903015    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:13.939602    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:13.939620    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:13.944534    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:13.944541    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:13.958917    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:13.958929    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:13.974976    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:13.974987    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:13.987920    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:13.987934    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:14.019126    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:14.019141    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:14.056390    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:14.056401    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:16.572310    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:21.574459    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:21.574561    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:21.588171    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:21.588258    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:21.599640    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:21.599727    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:21.610830    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:21.610917    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:21.622341    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:21.622431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:21.633785    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:21.633871    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:21.645021    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:21.645103    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:21.656778    5283 logs.go:276] 0 containers: []
	W0915 11:49:21.656791    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:21.656863    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:21.668211    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:21.668227    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:21.668233    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:21.694911    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:21.694924    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:21.731740    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:21.731751    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:21.736754    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:21.736763    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:21.752254    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:21.752272    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:21.769218    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:21.769229    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:21.786282    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:21.786300    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:21.805783    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:21.805792    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:21.823858    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:21.823868    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:21.837233    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:21.837245    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:21.874957    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:21.874970    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:21.890392    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:21.890403    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:21.906361    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:21.906373    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:24.419485    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:29.421797    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:29.422083    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:29.442508    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:29.442621    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:29.458065    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:29.458159    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:29.471015    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:29.471106    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:29.482736    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:29.482825    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:29.494094    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:29.494173    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:29.505761    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:29.505842    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:29.516709    5283 logs.go:276] 0 containers: []
	W0915 11:49:29.516721    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:29.516795    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:29.528721    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:29.528737    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:29.528743    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:29.533879    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:29.533891    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:29.574630    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:29.574646    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:29.587119    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:29.587130    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:29.605719    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:29.605732    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:29.618095    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:29.618107    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:29.631318    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:29.631331    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:29.648256    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:29.648270    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:29.675340    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:29.675359    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:29.711540    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:29.711555    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:29.727088    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:29.727104    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:29.742045    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:29.742060    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:29.754348    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:29.754362    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:32.271730    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:37.274050    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:37.274595    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:37.313961    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:37.314136    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:37.336380    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:37.336510    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:37.354339    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:37.354431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:37.366243    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:37.366312    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:37.377871    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:37.377925    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:37.389129    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:37.389177    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:37.399938    5283 logs.go:276] 0 containers: []
	W0915 11:49:37.399950    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:37.400026    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:37.411064    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:37.411083    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:37.411090    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:37.451628    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:37.451640    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:37.466444    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:37.466456    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:37.479681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:37.479694    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:37.492081    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:37.492097    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:37.504287    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:37.504300    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:37.517638    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:37.517650    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:37.555853    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:37.555871    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:37.561325    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:37.561341    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:37.576510    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:37.576526    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:37.589271    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:37.589284    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:37.605459    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:37.605473    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:37.639361    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:37.639376    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:40.169914    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:45.175199    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:45.175684    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:45.208867    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:45.209019    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:45.227583    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:45.227690    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:45.242593    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:45.242687    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:45.254232    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:45.254314    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:45.264402    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:45.264472    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:45.274921    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:45.275003    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:45.284866    5283 logs.go:276] 0 containers: []
	W0915 11:49:45.284878    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:45.284953    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:45.296467    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:45.296487    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:49:45.296493    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:49:45.314582    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:45.314591    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:45.327195    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:45.327208    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:45.339949    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:45.339961    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:45.358326    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:45.358338    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:45.363263    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:45.363274    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:45.405984    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:45.405995    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:45.421453    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:49:45.421465    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:49:45.435757    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:45.435771    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:45.448570    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:45.448581    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:45.475165    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:45.475174    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:45.490945    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:45.490955    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:45.528070    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:45.528085    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:45.547544    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:45.547553    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:45.560583    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:45.560595    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
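
[Annotation] Note the change at 11:49:45 above: the coredns filter now returns four containers ([cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]) where earlier passes returned two, so new coredns containers were created even while /healthz kept timing out. The gathering step itself just fans out the shell commands quoted verbatim in the log: docker logs --tail 400 <id> per container, journalctl for kubelet and Docker, and a crictl/docker fallback for container status. A hedged sketch of that fan-out — the gather helper is hypothetical; the command strings are the ones shown above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one diagnostics command through bash -c, matching
    // the ssh_runner.go:195 "Run: /bin/bash -c ..." lines above.
    func gather(name, cmd string) {
        fmt.Printf("Gathering logs for %s ...\n", name)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", name, err)
        }
        _ = out // a real collector would attach this to the report
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("coredns [cb2cf0c6e95a]", "docker logs --tail 400 cb2cf0c6e95a")
        gather("container status",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }
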
	I0915 11:49:48.083925    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:53.090173    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:53.090559    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:53.125120    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:53.125282    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:53.149929    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:53.150020    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:53.163253    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:53.163347    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:53.178071    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:53.178148    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:53.188701    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:53.188778    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:53.203483    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:53.203562    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:53.213939    5283 logs.go:276] 0 containers: []
	W0915 11:49:53.213950    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:53.214021    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:53.230147    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:53.230168    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:53.230174    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:53.235828    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:49:53.235840    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:49:53.248884    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:53.248896    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:53.261613    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:53.261627    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:53.283061    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:53.283072    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:53.295610    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:53.295620    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:53.310959    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:53.310972    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:53.327141    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:49:53.327159    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:49:53.350131    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:53.350144    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:53.370406    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:53.370424    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:53.397878    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:53.397893    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:53.434795    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:53.434806    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:53.472176    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:53.472184    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:53.484848    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:53.484860    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:53.500219    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:53.500237    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:56.018484    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:01.023360    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:01.023890    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:01.058499    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:01.058665    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:01.078113    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:01.078226    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:01.092609    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:01.092707    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:01.108995    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:01.109072    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:01.123999    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:01.124088    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:01.135234    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:01.135319    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:01.146230    5283 logs.go:276] 0 containers: []
	W0915 11:50:01.146240    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:01.146282    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:01.157508    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:01.157522    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:01.157527    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:01.162316    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:01.162328    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:01.178161    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:01.178175    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:01.196893    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:01.196911    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:01.209919    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:01.209932    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:01.237054    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:01.237074    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:01.274065    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:01.274082    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:01.289742    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:01.289757    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:01.302855    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:01.302869    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:01.315492    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:01.315506    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:01.329730    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:01.329745    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:01.343704    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:01.343717    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:01.380960    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:01.380971    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:01.396590    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:01.396606    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:01.409065    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:01.409076    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:03.924043    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:08.927690    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:08.927856    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:08.941861    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:08.941954    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:08.954078    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:08.954157    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:08.964756    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:08.964849    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:08.975383    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:08.975469    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:08.985744    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:08.985823    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:08.996432    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:08.996508    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:09.006872    5283 logs.go:276] 0 containers: []
	W0915 11:50:09.006884    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:09.006951    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:09.017577    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:09.017600    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:09.017605    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:09.034253    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:09.034263    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:09.046138    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:09.046149    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:09.061881    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:09.061894    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:09.074644    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:09.074658    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:09.087254    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:09.087268    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:09.129961    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:09.129975    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:09.155193    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:09.155206    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:09.160069    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:09.160081    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:09.174959    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:09.174971    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:09.192171    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:09.192179    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:09.204438    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:09.204452    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:09.242596    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:09.242615    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:09.259405    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:09.259419    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:09.276789    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:09.276804    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:11.791095    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:16.794185    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:16.794310    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:16.805594    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:16.805679    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:16.816522    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:16.816603    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:16.827292    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:16.827374    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:16.838027    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:16.838108    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:16.848967    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:16.849043    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:16.859471    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:16.859562    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:16.869369    5283 logs.go:276] 0 containers: []
	W0915 11:50:16.869380    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:16.869449    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:16.880431    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:16.880448    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:16.880454    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:16.885129    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:16.885137    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:16.899495    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:16.899509    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:16.911323    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:16.911341    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:16.947534    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:16.947548    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:16.969050    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:16.969066    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:16.981810    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:16.981821    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:16.995254    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:16.995267    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:17.021078    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:17.021091    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:17.056557    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:17.056568    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:17.068400    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:17.068412    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:17.082470    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:17.082483    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:17.095283    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:17.095296    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:17.110960    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:17.110972    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:17.123930    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:17.123944    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:19.645503    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:24.647645    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:24.647774    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:24.658800    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:24.658894    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:24.673868    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:24.673958    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:24.684702    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:24.684788    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:24.695396    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:24.695477    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:24.705896    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:24.705970    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:24.716391    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:24.716469    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:24.727000    5283 logs.go:276] 0 containers: []
	W0915 11:50:24.727011    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:24.727080    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:24.737526    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:24.737544    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:24.737550    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:24.749229    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:24.749244    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:24.760876    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:24.760889    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:24.776652    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:24.776663    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:24.794522    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:24.794532    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:24.819509    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:24.819520    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:24.859961    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:24.859977    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:24.872681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:24.872693    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:24.885906    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:24.885919    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:24.898811    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:24.898826    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:24.903705    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:24.903717    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:24.918948    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:24.918958    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:24.954757    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:24.954770    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:24.970352    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:24.970365    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:24.985314    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:24.985326    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:27.500126    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:32.503045    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:32.503257    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:32.518839    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:32.518940    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:32.531320    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:32.531408    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:32.542497    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:32.542581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:32.556962    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:32.557046    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:32.570440    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:32.570521    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:32.581380    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:32.581464    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:32.592019    5283 logs.go:276] 0 containers: []
	W0915 11:50:32.592030    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:32.592098    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:32.602679    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:32.602701    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:32.602706    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:32.618693    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:32.618704    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:32.631136    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:32.631152    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:32.656619    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:32.656628    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:32.691317    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:32.691326    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:32.695599    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:32.695608    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:32.716760    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:32.716773    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:32.752670    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:32.752682    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:32.764498    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:32.764510    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:32.776875    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:32.776887    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:32.799948    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:32.799959    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:32.818748    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:32.818760    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:32.831837    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:32.831849    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:32.847831    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:32.847841    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:32.864065    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:32.864079    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:35.382362    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:40.384940    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:40.385384    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:40.417101    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:40.417258    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:40.435884    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:40.435990    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:40.459419    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:40.459509    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:40.470627    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:40.470712    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:40.480881    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:40.480954    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:40.492870    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:40.492955    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:40.506782    5283 logs.go:276] 0 containers: []
	W0915 11:50:40.506793    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:40.506868    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:40.517042    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:40.517061    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:40.517066    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:40.534538    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:40.534547    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:40.552131    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:40.552141    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:40.577605    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:40.577618    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:40.612519    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:40.612534    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:40.624705    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:40.624716    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:40.639995    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:40.640006    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:40.651259    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:40.651270    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:40.655817    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:40.655826    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:40.694014    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:40.694036    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:40.715611    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:40.715625    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:40.729051    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:40.729065    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:40.741730    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:40.741743    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:40.754816    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:40.754828    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:40.767572    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:40.767584    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:43.283638    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:48.286075    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:48.286421    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:48.312446    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:48.312580    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:48.335658    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:48.335750    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:48.348462    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:48.348559    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:48.367107    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:48.367196    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:48.377451    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:48.377538    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:48.388077    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:48.388162    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:48.398765    5283 logs.go:276] 0 containers: []
	W0915 11:50:48.398777    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:48.398844    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:48.408848    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:48.408864    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:48.408869    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:48.420509    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:48.420521    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:48.438495    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:48.438508    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:48.451798    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:48.451812    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:48.485627    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:48.485637    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:48.519804    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:48.519819    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:48.531661    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:48.531675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:48.544153    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:48.544163    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:48.548499    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:48.548507    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:48.560324    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:48.560333    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:48.576372    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:48.576385    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:48.600710    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:48.600725    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:48.616240    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:48.616257    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:48.635052    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:48.635065    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:48.651510    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:48.651523    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:51.166262    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:56.168672    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:56.168945    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:56.198501    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:56.198625    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:56.218454    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:56.218545    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:56.231037    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:56.231133    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:56.242804    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:56.242891    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:56.253574    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:56.253649    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:56.265054    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:56.265151    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:56.276354    5283 logs.go:276] 0 containers: []
	W0915 11:50:56.276367    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:56.276433    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:56.287385    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:56.287406    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:56.287412    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:56.299419    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:56.299431    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:56.311804    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:56.311817    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:56.324481    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:56.324496    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:56.340227    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:56.340240    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:56.352141    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:56.352154    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:56.392557    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:56.392573    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:56.407008    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:56.407020    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:56.421085    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:56.421096    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:56.433404    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:56.433415    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:56.438059    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:56.438068    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:56.450406    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:56.450419    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:56.473934    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:56.473947    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:56.510254    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:56.510277    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:56.522977    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:56.522991    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:59.045152    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:04.047438    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:04.047565    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:04.058531    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:04.058610    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:04.070021    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:04.070104    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:04.084006    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:04.084092    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:04.095631    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:04.095721    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:04.106953    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:04.107045    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:04.117748    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:04.117835    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:04.128750    5283 logs.go:276] 0 containers: []
	W0915 11:51:04.128763    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:04.128839    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:04.139866    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:04.139885    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:04.139893    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:04.175842    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:04.175854    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:04.190594    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:04.190604    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:04.202827    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:04.202838    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:04.214798    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:04.214809    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:04.232972    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:04.232985    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:04.246895    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:04.246908    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:04.260461    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:04.260474    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:04.278728    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:04.278741    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:04.291036    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:04.291056    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:04.295727    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:04.295739    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:04.311275    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:04.311290    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:04.336980    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:04.336997    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:04.375578    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:04.375593    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:04.391036    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:04.391048    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:06.905117    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:11.905651    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:11.905817    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:11.918708    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:11.918796    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:11.929552    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:11.929632    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:11.948278    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:11.948367    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:11.959186    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:11.959273    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:11.969580    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:11.969661    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:11.980742    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:11.980827    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:11.991349    5283 logs.go:276] 0 containers: []
	W0915 11:51:11.991360    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:11.991431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:12.001573    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:12.001590    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:12.001597    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:12.038558    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:12.038572    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:12.050620    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:12.050632    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:12.065672    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:12.065687    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:12.083163    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:12.083177    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:12.106303    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:12.106313    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:12.139852    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:12.139860    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:12.151349    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:12.151361    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:12.163144    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:12.163155    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:12.174724    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:12.174736    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:12.186621    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:12.186632    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:12.201001    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:12.201012    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:12.212739    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:12.212755    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:12.223887    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:12.223896    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:12.228473    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:12.228483    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:14.743334    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:19.745038    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:19.745178    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:19.758487    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:19.758585    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:19.775192    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:19.775279    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:19.786242    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:19.786322    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:19.796929    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:19.796995    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:19.807569    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:19.807647    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:19.818900    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:19.818982    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:19.829711    5283 logs.go:276] 0 containers: []
	W0915 11:51:19.829729    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:19.829802    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:19.844658    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:19.844676    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:19.844682    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:19.879206    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:19.879216    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:19.891365    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:19.891376    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:19.903058    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:19.903073    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:19.936714    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:19.936725    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:19.947992    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:19.948008    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:19.970544    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:19.970553    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:19.975315    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:19.975323    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:19.988806    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:19.988815    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:20.009436    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:20.009450    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:20.023837    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:20.023848    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:20.038157    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:20.038170    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:20.050556    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:20.050571    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:20.062681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:20.062697    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:20.074689    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:20.074700    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:22.594987    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:27.597299    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:27.600621    5283 out.go:201] 
	W0915 11:51:27.603622    5283 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0915 11:51:27.603627    5283 out.go:270] * 
	W0915 11:51:27.604048    5283 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:51:27.615426    5283 out.go:201] 

** /stderr **
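
The repeating pattern in the stderr above is a fixed polling loop: minikube probes the apiserver's /healthz endpoint with a roughly 5-second per-request timeout, gathers component logs between attempts, and gives up once the overall 6m0s node wait expires. A rough shell equivalent of that probe loop (illustrative only, not minikube's actual Go implementation; the URL and timings are taken from the log output above):

  # poll https://10.0.2.15:8443/healthz until healthy or ~6 minutes elapse
  deadline=$((SECONDS + 360))
  while [ "$SECONDS" -lt "$deadline" ]; do
    # -k: the apiserver serves a self-signed cert; --max-time mirrors the 5s client timeout
    if curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; then
      echo "apiserver healthy"; break
    fi
    sleep 3   # minikube interleaves the log gathering seen above instead of a plain sleep
  done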
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-196000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-15 11:51:27.715316 -0700 PDT m=+3353.550937751
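To reproduce the failure outside the test harness, the exact invocation that exited with status 80 is recorded above, and the advice box already names the log-collection command; a sketch using only those verbatim commands:

  # the start command that failed (exit status 80)
  out/minikube-darwin-arm64 start -p running-upgrade-196000 --memory=2200 \
    --alsologtostderr -v=1 --driver=qemu2
  # collect logs for a GitHub issue, as the advice box suggests
  out/minikube-darwin-arm64 -p running-upgrade-196000 logs --file=logs.txt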
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-196000 -n running-upgrade-196000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-196000 -n running-upgrade-196000: exit status 2 (15.591072333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-196000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-530000          | force-systemd-flag-530000 | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-014000              | force-systemd-env-014000  | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-014000           | force-systemd-env-014000  | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT | 15 Sep 24 11:41 PDT |
	| start   | -p docker-flags-824000                | docker-flags-824000       | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-530000             | force-systemd-flag-530000 | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-530000          | force-systemd-flag-530000 | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT | 15 Sep 24 11:41 PDT |
	| start   | -p cert-expiration-621000             | cert-expiration-621000    | jenkins | v1.34.0 | 15 Sep 24 11:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-824000 ssh               | docker-flags-824000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-824000 ssh               | docker-flags-824000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-824000                | docker-flags-824000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT | 15 Sep 24 11:42 PDT |
	| start   | -p cert-options-255000                | cert-options-255000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-255000 ssh               | cert-options-255000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-255000 -- sudo        | cert-options-255000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-255000                | cert-options-255000       | jenkins | v1.34.0 | 15 Sep 24 11:42 PDT | 15 Sep 24 11:42 PDT |
	| start   | -p running-upgrade-196000             | minikube                  | jenkins | v1.26.0 | 15 Sep 24 11:42 PDT | 15 Sep 24 11:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-196000             | running-upgrade-196000    | jenkins | v1.34.0 | 15 Sep 24 11:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-621000             | cert-expiration-621000    | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-621000             | cert-expiration-621000    | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT | 15 Sep 24 11:45 PDT |
	| start   | -p kubernetes-upgrade-902000          | kubernetes-upgrade-902000 | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-902000          | kubernetes-upgrade-902000 | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT | 15 Sep 24 11:45 PDT |
	| start   | -p kubernetes-upgrade-902000          | kubernetes-upgrade-902000 | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-902000          | kubernetes-upgrade-902000 | jenkins | v1.34.0 | 15 Sep 24 11:45 PDT | 15 Sep 24 11:45 PDT |
	| start   | -p stopped-upgrade-515000             | minikube                  | jenkins | v1.26.0 | 15 Sep 24 11:45 PDT | 15 Sep 24 11:46 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-515000 stop           | minikube                  | jenkins | v1.26.0 | 15 Sep 24 11:46 PDT | 15 Sep 24 11:46 PDT |
	| start   | -p stopped-upgrade-515000             | stopped-upgrade-515000    | jenkins | v1.34.0 | 15 Sep 24 11:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
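Note on the docker-flags rows above: the test starts the daemon with --docker-env/--docker-opt and then checks that the values reached dockerd. Outside the harness, the same verification is a sketch along these lines (profile name taken from this run):

	minikube -p docker-flags-824000 ssh -- sudo systemctl show docker --property=Environment --no-pager
	minikube -p docker-flags-824000 ssh -- sudo systemctl show docker --property=ExecStart --no-pager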
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 11:46:23
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 11:46:23.887982    5437 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:46:23.888117    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:46:23.888121    5437 out.go:358] Setting ErrFile to fd 2...
	I0915 11:46:23.888124    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:46:23.888309    5437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:46:23.889427    5437 out.go:352] Setting JSON to false
	I0915 11:46:23.907918    5437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4546,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:46:23.907999    5437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:46:23.911625    5437 out.go:177] * [stopped-upgrade-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:46:23.919716    5437 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:46:23.919765    5437 notify.go:220] Checking for updates...
	I0915 11:46:23.926664    5437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:46:23.928126    5437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:46:23.931592    5437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:46:23.934626    5437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:46:23.937672    5437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:46:23.941012    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:46:23.944610    5437 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 11:46:23.947702    5437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:46:23.951569    5437 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:46:23.958636    5437 start.go:297] selected driver: qemu2
	I0915 11:46:23.958642    5437 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:46:23.958688    5437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:46:23.961559    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:46:23.961604    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:46:23.961626    5437 start.go:340] cluster config:
	{Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:46:23.961687    5437 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:46:23.968625    5437 out.go:177] * Starting "stopped-upgrade-515000" primary control-plane node in "stopped-upgrade-515000" cluster
	I0915 11:46:23.971601    5437 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:46:23.971636    5437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0915 11:46:23.971644    5437 cache.go:56] Caching tarball of preloaded images
	I0915 11:46:23.971735    5437 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:46:23.971741    5437 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0915 11:46:23.971799    5437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/config.json ...
	I0915 11:46:23.972292    5437 start.go:360] acquireMachinesLock for stopped-upgrade-515000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:46:23.972333    5437 start.go:364] duration metric: took 32.417µs to acquireMachinesLock for "stopped-upgrade-515000"
	I0915 11:46:23.972343    5437 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:46:23.972347    5437 fix.go:54] fixHost starting: 
	I0915 11:46:23.972457    5437 fix.go:112] recreateIfNeeded on stopped-upgrade-515000: state=Stopped err=<nil>
	W0915 11:46:23.972466    5437 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:46:23.980436    5437 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-515000" ...
	I0915 11:46:26.068640    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:23.984597    5437 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:46:23.984671    5437 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50515-:22,hostfwd=tcp::50516-:2376,hostname=stopped-upgrade-515000 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/disk.qcow2
	I0915 11:46:24.033062    5437 main.go:141] libmachine: STDOUT: 
	I0915 11:46:24.033106    5437 main.go:141] libmachine: STDERR: 
	I0915 11:46:24.033112    5437 main.go:141] libmachine: Waiting for VM to start (ssh -p 50515 docker@127.0.0.1)...
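The qemu-system-aarch64 command above uses user-mode networking, so the guest is reachable only through the hostfwd rules: host port 50515 forwards to guest port 22 (SSH) and 50516 to 2376 (Docker TLS). The wait loop is doing the equivalent of this manual session, using the machine key shown later in the log:

	ssh -i /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa -p 50515 docker@127.0.0.1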
	I0915 11:46:31.071190    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:31.071321    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:31.082932    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:31.083022    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:31.094272    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:31.094369    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:31.105436    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:31.105511    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:31.116196    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:31.116280    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:31.127420    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:31.127503    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:31.138331    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:31.138407    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:31.151702    5283 logs.go:276] 0 containers: []
	W0915 11:46:31.151715    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:31.151773    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:31.166635    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:31.166652    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:31.166658    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:31.178752    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:31.178764    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:31.191147    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:31.191161    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:31.215838    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:31.215845    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:31.235124    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:31.235133    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:31.239726    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:31.239733    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:31.254536    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:31.254550    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:31.267416    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:31.267427    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:31.278913    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:31.278926    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:31.316666    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:31.316675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:31.330794    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:31.330804    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:31.351119    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:31.351130    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:31.363936    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:31.363951    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:31.387332    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:31.387342    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:31.399602    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:31.399613    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:31.438407    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:31.438422    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:31.456848    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:31.456859    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:33.970918    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:38.973273    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
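Each of these "stopped:" lines is a client-side timeout: the probe gives the apiserver five seconds and it never answers on 10.0.2.15:8443, so minikube falls back to dumping container logs between attempts. A manual probe from inside the guest, assuming curl is available in the image, would look like:

	curl -k --max-time 5 https://10.0.2.15:8443/healthz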
	I0915 11:46:38.973831    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:39.012843    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:39.013018    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:39.034915    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:39.035014    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:39.057775    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:39.057869    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:39.068795    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:39.068868    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:39.079235    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:39.079322    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:39.090343    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:39.090428    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:39.105047    5283 logs.go:276] 0 containers: []
	W0915 11:46:39.105062    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:39.105134    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:39.115227    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:39.115247    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:39.115253    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:39.119646    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:39.119655    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:39.159227    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:39.159240    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:39.171199    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:39.171211    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:39.185819    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:39.185832    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:39.198527    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:39.198539    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:39.212929    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:39.212945    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:39.226663    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:39.226675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:39.238431    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:39.238444    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:39.276248    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:39.276261    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:39.297767    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:39.297777    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:39.311170    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:39.311182    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:39.322284    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:39.322294    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:39.333732    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:39.333745    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:39.351018    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:39.351031    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:39.362686    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:39.362699    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:39.381088    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:39.381101    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:41.905772    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:43.879784    5437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/config.json ...
	I0915 11:46:43.880724    5437 machine.go:93] provisionDockerMachine start ...
	I0915 11:46:43.881071    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:43.881716    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:43.881736    5437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 11:46:43.970337    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 11:46:43.970374    5437 buildroot.go:166] provisioning hostname "stopped-upgrade-515000"
	I0915 11:46:43.970521    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:43.970796    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:43.970811    5437 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-515000 && echo "stopped-upgrade-515000" | sudo tee /etc/hostname
	I0915 11:46:44.056199    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-515000
	
	I0915 11:46:44.056287    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.056457    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.056479    5437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-515000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-515000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-515000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 11:46:44.127535    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
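The empty output is the quiet path: the script prints something only when it has to append a new line via tee; both the hostname-already-present case and the in-place sed rewrite of the 127.0.1.1 entry are silent. A quick check that it took effect:

	hostname && grep 127.0.1.1 /etc/hosts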
	I0915 11:46:44.127548    5437 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1650/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1650/.minikube}
	I0915 11:46:44.127556    5437 buildroot.go:174] setting up certificates
	I0915 11:46:44.127561    5437 provision.go:84] configureAuth start
	I0915 11:46:44.127565    5437 provision.go:143] copyHostCerts
	I0915 11:46:44.127643    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem, removing ...
	I0915 11:46:44.127651    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem
	I0915 11:46:44.127806    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem (1078 bytes)
	I0915 11:46:44.128010    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem, removing ...
	I0915 11:46:44.128015    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem
	I0915 11:46:44.128292    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem (1123 bytes)
	I0915 11:46:44.128414    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem, removing ...
	I0915 11:46:44.128420    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem
	I0915 11:46:44.128486    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem (1679 bytes)
	I0915 11:46:44.128596    5437 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-515000 san=[127.0.0.1 localhost minikube stopped-upgrade-515000]
	I0915 11:46:44.324753    5437 provision.go:177] copyRemoteCerts
	I0915 11:46:44.324810    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 11:46:44.324821    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.361996    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 11:46:44.368797    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0915 11:46:44.375286    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 11:46:44.382126    5437 provision.go:87] duration metric: took 254.557875ms to configureAuth
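configureAuth regenerated the Docker server certificate with the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-515000) and pushed it to /etc/docker. The SANs can be confirmed the same way the cert-options test inspects the apiserver certificate, just pointed at the Docker one:

	sudo openssl x509 -text -noout -in /etc/docker/server.pem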
	I0915 11:46:44.382134    5437 buildroot.go:189] setting minikube options for container-runtime
	I0915 11:46:44.382245    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:46:44.382287    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.382380    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.382386    5437 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 11:46:44.448750    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0915 11:46:44.448761    5437 buildroot.go:70] root file system type: tmpfs
	I0915 11:46:44.448815    5437 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 11:46:44.448880    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.448999    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.449034    5437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 11:46:44.516091    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 11:46:44.516152    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.516313    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.516324    5437 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 11:46:44.850192    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0915 11:46:44.850208    5437 machine.go:96] duration metric: took 969.483833ms to provisionDockerMachine
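The unit is staged as docker.service.new and only swapped in when diff finds a difference; here diff fails because no docker.service existed yet, so the move/daemon-reload/enable/restart branch runs and systemd creates the multi-user.target symlink. The effective daemon command line can then be confirmed with the same query the docker-flags test uses:

	sudo systemctl show docker --property=ExecStart --no-pager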
	I0915 11:46:44.850215    5437 start.go:293] postStartSetup for "stopped-upgrade-515000" (driver="qemu2")
	I0915 11:46:44.850222    5437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 11:46:44.850284    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 11:46:44.850293    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.885586    5437 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 11:46:44.886922    5437 info.go:137] Remote host: Buildroot 2021.02.12
	I0915 11:46:44.886930    5437 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/addons for local assets ...
	I0915 11:46:44.887012    5437 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/files for local assets ...
	I0915 11:46:44.887105    5437 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0915 11:46:44.887211    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 11:46:44.890319    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:46:44.897460    5437 start.go:296] duration metric: took 47.239417ms for postStartSetup
	I0915 11:46:44.897473    5437 fix.go:56] duration metric: took 20.925384s for fixHost
	I0915 11:46:44.897521    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.897626    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.897631    5437 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 11:46:44.960390    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726426005.155784879
	
	I0915 11:46:44.960400    5437 fix.go:216] guest clock: 1726426005.155784879
	I0915 11:46:44.960405    5437 fix.go:229] Guest: 2024-09-15 11:46:45.155784879 -0700 PDT Remote: 2024-09-15 11:46:44.897475 -0700 PDT m=+21.037902418 (delta=258.309879ms)
	I0915 11:46:44.960417    5437 fix.go:200] guest clock delta is within tolerance: 258.309879ms
	I0915 11:46:44.960422    5437 start.go:83] releasing machines lock for "stopped-upgrade-515000", held for 20.988339834s
	I0915 11:46:44.960498    5437 ssh_runner.go:195] Run: cat /version.json
	I0915 11:46:44.960508    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.960646    5437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 11:46:44.960702    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	W0915 11:46:44.961283    5437 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50515: connect: connection refused
	I0915 11:46:44.961302    5437 retry.go:31] will retry after 278.314216ms: dial tcp [::1]:50515: connect: connection refused
	W0915 11:46:45.291646    5437 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0915 11:46:45.291784    5437 ssh_runner.go:195] Run: systemctl --version
	I0915 11:46:45.295785    5437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 11:46:45.299089    5437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 11:46:45.299147    5437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0915 11:46:45.304752    5437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0915 11:46:45.312072    5437 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
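The two find/sed passes above drop IPv6 dst/subnet entries from any bridge CNI config and pin the podman bridge to subnet 10.244.0.0/16 (gateway 10.244.0.1), minikube's default pod CIDR. The rewritten config can be inspected directly in the guest:

	sudo cat /etc/cni/net.d/87-podman-bridge.conflist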
	I0915 11:46:45.312090    5437 start.go:495] detecting cgroup driver to use...
	I0915 11:46:45.312196    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:46:45.322326    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0915 11:46:45.326582    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 11:46:45.330633    5437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 11:46:45.330675    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 11:46:45.334399    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:46:45.337861    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 11:46:45.341100    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:46:45.344001    5437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 11:46:45.347080    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 11:46:45.350269    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 11:46:45.353529    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 11:46:45.356248    5437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 11:46:45.359166    5437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 11:46:45.362201    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:45.422183    5437 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0915 11:46:45.428681    5437 start.go:495] detecting cgroup driver to use...
	I0915 11:46:45.428736    5437 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 11:46:45.438258    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:46:45.443125    5437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 11:46:45.449216    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:46:45.453953    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 11:46:45.458625    5437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0915 11:46:45.506713    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 11:46:45.511731    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:46:45.517074    5437 ssh_runner.go:195] Run: which cri-dockerd
	I0915 11:46:45.518368    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 11:46:45.521297    5437 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0915 11:46:45.526315    5437 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 11:46:45.588450    5437 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 11:46:45.648857    5437 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 11:46:45.648925    5437 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 11:46:45.654189    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:45.713017    5437 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:46:46.859958    5437 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146939208s)
	I0915 11:46:46.860029    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 11:46:46.864930    5437 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0915 11:46:46.871136    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:46:46.876219    5437 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 11:46:46.940090    5437 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 11:46:47.012197    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:47.076282    5437 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 11:46:47.082803    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:46:47.088591    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:47.151799    5437 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 11:46:47.193458    5437 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 11:46:47.193548    5437 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 11:46:47.196086    5437 start.go:563] Will wait 60s for crictl version
	I0915 11:46:47.196155    5437 ssh_runner.go:195] Run: which crictl
	I0915 11:46:47.197666    5437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 11:46:47.213238    5437 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
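crictl picks up its endpoint from the /etc/crictl.yaml written just above, so plain invocations talk to cri-dockerd; the endpoint can also be passed explicitly, equivalent to the "crictl ps -a" used during log gathering:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a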
	I0915 11:46:47.213323    5437 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:46:47.230436    5437 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:46:46.908326    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:46.908430    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:46.921144    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:46.921232    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:46.931841    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:46.931925    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:46.943663    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:46.943751    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:46.954650    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:46.954736    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:46.965482    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:46.965567    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:46.976271    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:46.976354    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:46.987061    5283 logs.go:276] 0 containers: []
	W0915 11:46:46.987075    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:46.987150    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:46.997967    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:46.997984    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:46.997990    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:47.015999    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:47.016010    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:47.028314    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:47.028329    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:47.040769    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:47.040781    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:47.056217    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:47.056233    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:47.068114    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:47.068126    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:47.107113    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:47.107133    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:47.112181    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:47.112193    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:47.148660    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:47.148677    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:47.169586    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:47.169600    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:47.184885    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:47.184902    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:47.201546    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:47.201558    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:47.222044    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:47.222057    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:47.246387    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:47.246404    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:47.268921    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:47.268930    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:47.281332    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:47.281346    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:47.293624    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:47.293640    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
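
The container-status probe above is a shell fallback chain: "which crictl || echo crictl" expands to crictl's full path when it is installed, or to the bare word "crictl" otherwise, whose failure then triggers the "|| sudo docker ps -a" branch. A commented standalone form of the same idiom:

    # Prefer crictl when present; the bare word "crictl" fails fast when it
    # is not on PATH, so the docker fallback runs instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
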
	I0915 11:46:47.248966    5437 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0915 11:46:47.249056    5437 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0915 11:46:47.250511    5437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
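
The /etc/hosts rewrite above is the usual pattern for editing a root-owned file when output redirection cannot run under sudo: strip any stale host.minikube.internal entry, append the fresh mapping, stage the result in a PID-keyed temp file, and install it with sudo cp. A commented sketch of the same pattern, with the name and IP taken from this run:

    # Rebuild the hosts file without the old entry, append the new mapping,
    # then copy into place; only the final cp needs root.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
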
	I0915 11:46:47.254765    5437 kubeadm.go:883] updating cluster {Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0915 11:46:47.254821    5437 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:46:47.254884    5437 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:46:47.265972    5437 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:46:47.265981    5437 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
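
The "wasn't preloaded" message is a naming mismatch rather than missing bits: the tarball ships images tagged under k8s.gcr.io (listed just above), while this minikube build looks for the renamed registry.k8s.io references, so it falls back to its image cache below. As a purely hypothetical manual workaround (not what the code does; minikube removes and reloads from cache instead), the same layers could be aliased with a retag:

    # Hypothetical: alias the old registry name to the new one for one image.
    docker image tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
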
	I0915 11:46:47.266034    5437 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:46:47.269427    5437 ssh_runner.go:195] Run: which lz4
	I0915 11:46:47.270775    5437 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 11:46:47.272138    5437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 11:46:47.272155    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
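
The stat/scp pair above is ssh_runner's existence-check protocol: stat -c "%s %y" prints size and mtime when the remote file exists, and its status-1 exit is what triggers the copy. A minimal sketch of the check-then-copy gate (the echo stands in for the real scp step):

    # stat exits 0 with "SIZE MTIME" when the file exists, 1 when it does not.
    if ! stat -c "%s %y" /preloaded.tar.lz4 2>/dev/null; then
        echo "absent: would scp the cached preload tarball to /preloaded.tar.lz4"
    fi
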
	I0915 11:46:48.195281    5437 docker.go:649] duration metric: took 924.554625ms to copy over tarball
	I0915 11:46:48.195350    5437 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
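
Two flags in that extraction matter: -I lz4 delegates decompression to the lz4 binary located by the earlier "which lz4" probe, and --xattrs --xattrs-include security.capability preserves file-capability attributes that a plain untar would drop. The same invocation, annotated:

    # -C /var: unpack into the docker image store under /var
    # --xattrs-include security.capability: keep capability bits on binaries
    # -I lz4: use the external lz4 decompressor found via "which lz4"
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
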
	I0915 11:46:49.808701    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:49.351397    5437 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156047083s)
	I0915 11:46:49.351414    5437 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 11:46:49.367151    5437 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:46:49.370675    5437 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0915 11:46:49.375777    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:49.440823    5437 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:46:51.152747    5437 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.711927125s)
	I0915 11:46:51.152856    5437 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:46:51.165129    5437 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:46:51.165139    5437 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0915 11:46:51.165145    5437 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0915 11:46:51.169223    5437 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:51.171290    5437 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.173149    5437 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:51.173179    5437 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.175086    5437 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.175212    5437 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.176438    5437 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.176539    5437 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.177559    5437 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.177563    5437 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.178737    5437 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.179891    5437 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.180152    5437 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.181187    5437 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0915 11:46:51.182585    5437 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.183318    5437 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0915 11:46:51.571741    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.585071    5437 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0915 11:46:51.585105    5437 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.585176    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.588497    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.603870    5437 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0915 11:46:51.603900    5437 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.603971    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.604121    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0915 11:46:51.617452    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0915 11:46:51.622625    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632823    5437 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0915 11:46:51.632845    5437 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632915    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632917    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.640503    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.646298    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0915 11:46:51.646576    5437 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0915 11:46:51.646594    5437 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.646652    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.655121    5437 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0915 11:46:51.655144    5437 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.655222    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.662439    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0915 11:46:51.668464    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0915 11:46:51.687622    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0915 11:46:51.694995    5437 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0915 11:46:51.695138    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.697698    5437 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0915 11:46:51.697720    5437 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0915 11:46:51.697772    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0915 11:46:51.711079    5437 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0915 11:46:51.711100    5437 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.711165    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.713043    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0915 11:46:51.713159    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0915 11:46:51.722187    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0915 11:46:51.722202    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0915 11:46:51.722213    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0915 11:46:51.722311    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:46:51.724704    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0915 11:46:51.724719    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0915 11:46:51.735856    5437 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0915 11:46:51.735883    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0915 11:46:51.776774    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0915 11:46:51.780797    5437 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:46:51.780806    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0915 11:46:51.816704    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
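
Each cached image is loaded with "sudo cat ... | docker load" rather than "docker load -i": sudo wraps only the read of the root-owned tarball under /var/lib/minikube/images, while docker load consumes the image tar on stdin. Annotated form of the command from the log:

    # Root is needed only for the read; docker load takes the tar on stdin.
    sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load
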
	W0915 11:46:52.033673    5437 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0915 11:46:52.033995    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.065685    5437 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0915 11:46:52.065726    5437 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.065858    5437 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.089909    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0915 11:46:52.090093    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:46:52.092160    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0915 11:46:52.092182    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0915 11:46:52.124669    5437 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:46:52.124685    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0915 11:46:52.358373    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0915 11:46:52.358412    5437 cache_images.go:92] duration metric: took 1.193265084s to LoadCachedImages
	W0915 11:46:52.358454    5437 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0915 11:46:52.358462    5437 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0915 11:46:52.358509    5437 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-515000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
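
The kubelet unit text above relies on the standard systemd drop-in override: the first, empty ExecStart= clears any command inherited from the base kubelet.service before the versioned command line is set, and the scp lines below install it as 10-kubeadm.conf under kubelet.service.d. A hand-written sketch of the same drop-in (flags condensed from the log; the real file is pushed via scp):

    # Drop-ins in kubelet.service.d override the packaged unit; the empty
    # ExecStart= resets the inherited command before the new one is added.
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload
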
	I0915 11:46:52.358600    5437 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 11:46:52.372088    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:46:52.372102    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:46:52.372106    5437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 11:46:52.372116    5437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-515000 NodeName:stopped-upgrade-515000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 11:46:52.372182    5437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-515000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
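
The rendered kubeadm.yaml is one file carrying four YAML documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note also that the KubeletConfiguration deliberately disables disk-pressure eviction (100% image-GC threshold, 0% eviction floors) so small CI disks do not evict pods. A quick way to list the embedded documents on the guest:

    # Print the kind of each YAML document in the generated config.
    sudo awk '/^kind:/{print $2}' /var/tmp/minikube/kubeadm.yaml
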
	I0915 11:46:52.372243    5437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0915 11:46:52.375217    5437 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 11:46:52.375248    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 11:46:52.378154    5437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0915 11:46:52.382996    5437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 11:46:52.387894    5437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0915 11:46:52.393139    5437 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0915 11:46:52.394224    5437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 11:46:52.398299    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:52.456120    5437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:46:52.461670    5437 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000 for IP: 10.0.2.15
	I0915 11:46:52.461682    5437 certs.go:194] generating shared ca certs ...
	I0915 11:46:52.461690    5437 certs.go:226] acquiring lock for ca certs: {Name:mkae14c7548e7e09ac75f5a854dc2935289ebc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.461846    5437 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key
	I0915 11:46:52.461883    5437 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key
	I0915 11:46:52.461888    5437 certs.go:256] generating profile certs ...
	I0915 11:46:52.461947    5437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key
	I0915 11:46:52.461963    5437 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb
	I0915 11:46:52.461972    5437 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0915 11:46:52.572755    5437 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb ...
	I0915 11:46:52.572774    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb: {Name:mkf2e38a464651807a582ee966b82ec0b7cc1e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.573090    5437 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb ...
	I0915 11:46:52.573094    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb: {Name:mk343596d640a172cbd21cac5c220f0c028bad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.573237    5437 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt
	I0915 11:46:52.573606    5437 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key
	I0915 11:46:52.573758    5437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.key
	I0915 11:46:52.573878    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem (1338 bytes)
	W0915 11:46:52.573907    5437 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0915 11:46:52.573926    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 11:46:52.573958    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem (1078 bytes)
	I0915 11:46:52.573979    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem (1123 bytes)
	I0915 11:46:52.573997    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem (1679 bytes)
	I0915 11:46:52.574056    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:46:52.574373    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 11:46:52.581519    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 11:46:52.588330    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 11:46:52.595454    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 11:46:52.602833    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 11:46:52.610962    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 11:46:52.618765    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 11:46:52.625968    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 11:46:52.633068    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0915 11:46:52.639640    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 11:46:52.646649    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0915 11:46:52.653073    5437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 11:46:52.657987    5437 ssh_runner.go:195] Run: openssl version
	I0915 11:46:52.659793    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0915 11:46:52.663237    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.664836    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:11 /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.664867    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.666514    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0915 11:46:52.669521    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0915 11:46:52.672427    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.673711    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:11 /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.673734    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.675378    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 11:46:52.678725    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 11:46:52.681911    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.683289    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.683307    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.685130    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
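
The openssl/ln sequence above implements OpenSSL's hashed-directory lookup: "openssl x509 -hash -noout" prints the eight-hex-digit subject-name hash, and /etc/ssl/certs needs a <hash>.0 symlink under exactly that name (51391683.0, 3ec20f2e.0 and b5213941.0 in this run) for verification to find the CA. A sketch that derives the link name instead of hard-coding it:

    # Compute the subject hash and create the <hash>.0 lookup symlink.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
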
	I0915 11:46:52.687931    5437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 11:46:52.689356    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 11:46:52.691116    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 11:46:52.692833    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 11:46:52.694536    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 11:46:52.696390    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 11:46:52.698124    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
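
This burst of -checkend probes is a 24-hour expiry guard: openssl exits 0 if the certificate is still valid 86400 seconds from now and 1 otherwise, which is how the restart path decides whether control-plane certificates must be regenerated. The same check in standalone form:

    # Exit status says whether the cert survives the next 24h.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "valid for at least another day"
    else
        echo "expires within 24h"
    fi
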
	I0915 11:46:52.699967    5437 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:46:52.700037    5437 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:46:52.710623    5437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 11:46:52.714047    5437 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 11:46:52.714060    5437 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 11:46:52.714085    5437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 11:46:52.717635    5437 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:46:52.717935    5437 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-515000" does not appear in /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:46:52.718034    5437 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1650/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-515000" cluster setting kubeconfig missing "stopped-upgrade-515000" context setting]
	I0915 11:46:52.718228    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.718727    5437 kapi.go:59] client config for stopped-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104435800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:46:52.719055    5437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 11:46:52.722133    5437 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-515000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
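
Drift detection here is nothing more than diff's exit status: "diff -u" exits 0 when the files match and 1 when they differ, and a status-1 exit together with the unified diff above (unix:// CRI-socket scheme, cgroupfs driver, the two added kubelet options) flips the code into the reconfigure path. A minimal sketch of the same gate, ending in the copy that appears further down in the log:

    # diff exit 1 => configs differ => adopt the freshly rendered file.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
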
	I0915 11:46:52.722139    5437 kubeadm.go:1160] stopping kube-system containers ...
	I0915 11:46:52.722187    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:46:52.734296    5437 docker.go:483] Stopping containers: [3c2c62219606 430d8ca67bc4 66a874cf4b12 c1d50cfb639e 65c77278924b a674ca46f50d 14151d79a4b7 40d74a81f121]
	I0915 11:46:52.734373    5437 ssh_runner.go:195] Run: docker stop 3c2c62219606 430d8ca67bc4 66a874cf4b12 c1d50cfb639e 65c77278924b a674ca46f50d 14151d79a4b7 40d74a81f121
	I0915 11:46:52.745213    5437 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 11:46:52.750885    5437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:46:52.753897    5437 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:46:52.753903    5437 kubeadm.go:157] found existing configuration files:
	
	I0915 11:46:52.753927    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf
	I0915 11:46:52.756611    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:46:52.756641    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:46:52.759375    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf
	I0915 11:46:52.762204    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:46:52.762230    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:46:52.764737    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf
	I0915 11:46:52.767300    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:46:52.767324    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:46:52.770584    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf
	I0915 11:46:52.773252    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:46:52.773282    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 11:46:52.775694    5437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:46:52.778733    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:52.803026    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.211172    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.325265    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.345718    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.372610    5437 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:46:53.372699    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:46:53.873890    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
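
The apiserver-process wait polls pgrep with three flags worth noting: -f matches the pattern against the full command line, -x requires that match to cover the whole line (hence the .* at both ends), and -n returns only the newest matching PID. Standalone, with the pattern quoted for an interactive shell:

    # Newest PID whose entire command line matches; exit 1 when none does.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
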
	I0915 11:46:54.810468    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:54.810581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:46:54.822215    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:46:54.822299    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:46:54.832700    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:46:54.832793    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:46:54.842988    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:46:54.843071    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:46:54.853861    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:46:54.853944    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:46:54.863973    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:46:54.864042    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:46:54.874995    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:46:54.875074    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:46:54.885495    5283 logs.go:276] 0 containers: []
	W0915 11:46:54.885509    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:46:54.885571    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:46:54.900175    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:46:54.900193    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:46:54.900199    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:46:54.920207    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:46:54.920217    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:46:54.932362    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:46:54.932374    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:46:54.944060    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:46:54.944070    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:46:54.958458    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:46:54.958468    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:46:54.969660    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:46:54.969671    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:46:54.981385    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:46:54.981402    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:46:55.016813    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:46:55.016824    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:46:55.020978    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:46:55.020987    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:46:55.058064    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:46:55.058077    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:46:55.072147    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:46:55.072160    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:46:55.086766    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:46:55.086779    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:46:55.114879    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:46:55.114892    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:46:55.132074    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:46:55.132084    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:46:55.156091    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:46:55.156102    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:46:55.173627    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:46:55.173640    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:46:55.187315    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:46:55.187324    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:46:57.701322    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:54.374759    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:46:54.379615    5437 api_server.go:72] duration metric: took 1.007018875s to wait for apiserver process to appear ...
	I0915 11:46:54.379624    5437 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:46:54.379637    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
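
Both minikube processes in this log (5283 and 5437) are now in the same health-wait loop: GET /healthz on the guest apiserver with a short client timeout, log "stopped:" when the deadline passes, re-gather component logs, and retry. The equivalent manual probe (-k because the apiserver certificate is not in the local trust store; a healthy apiserver answers "ok"):

    # Probe the apiserver health endpoint with a 5s ceiling.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
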
	I0915 11:47:02.703618    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:02.703893    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:02.729625    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:47:02.729767    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:02.745765    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:47:02.745874    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:02.758848    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:47:02.758935    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:02.770500    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:47:02.770581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:02.780617    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:47:02.780702    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:02.791041    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:47:02.791122    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:02.800804    5283 logs.go:276] 0 containers: []
	W0915 11:47:02.800816    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:02.800890    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:02.811786    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:47:02.811805    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:02.811813    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:02.835413    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:47:02.835420    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:47:02.852452    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:47:02.852462    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:47:02.865012    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:02.865024    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:02.901661    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:02.901672    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:02.937532    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:47:02.937544    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:47:02.952052    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:47:02.952063    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:47:02.973200    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:47:02.973215    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:47:02.983989    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:47:02.984002    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:47:02.998104    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:47:02.998114    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:47:03.015711    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:47:03.015724    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:47:03.027729    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:47:03.027741    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:47:03.039089    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:47:03.039101    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:47:03.051825    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:03.051836    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:03.056562    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:47:03.056572    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:47:03.068410    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:47:03.068421    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:47:03.080367    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:47:03.080381    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
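
The "Run: docker ps -a --filter=name=k8s_... / docker logs --tail 400 ..." pairs above are one round of minikube's fixed log-gathering pattern: enumerate the containers for each control-plane component, then tail each container's logs. A minimal shell sketch of the same pattern (the component names and the 400-line tail are taken from the log; the loop itself is illustrative, not minikube's code):

    # Enumerate containers per component, then tail each one's logs.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
        echo "=== ${name} [${id}] ==="
        docker logs --tail 400 "${id}"
      done
    done
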
	I0915 11:46:59.381692    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:59.381760    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:05.592779    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:04.382235    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:04.382355    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:10.595098    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:10.595215    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:10.606903    5283 logs.go:276] 2 containers: [6bc3b7ef5b7e 9fbf46ad5e75]
	I0915 11:47:10.606990    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:10.617798    5283 logs.go:276] 2 containers: [02c44962b551 641fb718dc87]
	I0915 11:47:10.617884    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:10.628800    5283 logs.go:276] 1 containers: [47a41d45e2ac]
	I0915 11:47:10.628882    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:10.639822    5283 logs.go:276] 2 containers: [ae2d600f102e 3373156fd94c]
	I0915 11:47:10.639912    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:10.650538    5283 logs.go:276] 1 containers: [909572fdf77f]
	I0915 11:47:10.650624    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:10.661652    5283 logs.go:276] 2 containers: [82a4311ce7ea a5e082780bcb]
	I0915 11:47:10.661735    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:10.672099    5283 logs.go:276] 0 containers: []
	W0915 11:47:10.672111    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:10.672186    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:10.683269    5283 logs.go:276] 2 containers: [e4fcaa4dc8fc 857b28d450f2]
	I0915 11:47:10.683290    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:10.683297    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:10.719399    5283 logs.go:123] Gathering logs for kube-apiserver [9fbf46ad5e75] ...
	I0915 11:47:10.719410    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fbf46ad5e75"
	I0915 11:47:10.759134    5283 logs.go:123] Gathering logs for kube-scheduler [3373156fd94c] ...
	I0915 11:47:10.759145    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3373156fd94c"
	I0915 11:47:10.771357    5283 logs.go:123] Gathering logs for storage-provisioner [e4fcaa4dc8fc] ...
	I0915 11:47:10.771368    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4fcaa4dc8fc"
	I0915 11:47:10.783039    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:10.783049    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:10.821204    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:10.821217    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:10.825531    5283 logs.go:123] Gathering logs for etcd [641fb718dc87] ...
	I0915 11:47:10.825538    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 641fb718dc87"
	I0915 11:47:10.843390    5283 logs.go:123] Gathering logs for kube-controller-manager [82a4311ce7ea] ...
	I0915 11:47:10.843401    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82a4311ce7ea"
	I0915 11:47:10.861201    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:47:10.861212    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:47:10.873267    5283 logs.go:123] Gathering logs for kube-apiserver [6bc3b7ef5b7e] ...
	I0915 11:47:10.873279    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6bc3b7ef5b7e"
	I0915 11:47:10.887374    5283 logs.go:123] Gathering logs for etcd [02c44962b551] ...
	I0915 11:47:10.887386    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02c44962b551"
	I0915 11:47:10.901236    5283 logs.go:123] Gathering logs for coredns [47a41d45e2ac] ...
	I0915 11:47:10.901247    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47a41d45e2ac"
	I0915 11:47:10.913026    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:10.913038    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:10.934734    5283 logs.go:123] Gathering logs for kube-scheduler [ae2d600f102e] ...
	I0915 11:47:10.934744    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2d600f102e"
	I0915 11:47:10.946201    5283 logs.go:123] Gathering logs for kube-proxy [909572fdf77f] ...
	I0915 11:47:10.946215    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 909572fdf77f"
	I0915 11:47:10.958310    5283 logs.go:123] Gathering logs for kube-controller-manager [a5e082780bcb] ...
	I0915 11:47:10.958322    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5e082780bcb"
	I0915 11:47:10.969827    5283 logs.go:123] Gathering logs for storage-provisioner [857b28d450f2] ...
	I0915 11:47:10.969841    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 857b28d450f2"
	I0915 11:47:13.483943    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:09.383250    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:09.383292    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:18.486208    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:18.486289    5283 kubeadm.go:597] duration metric: took 4m4.210233792s to restartPrimaryControlPlane
	W0915 11:47:18.486357    5283 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 11:47:18.486387    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0915 11:47:14.384557    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:14.384658    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:19.457201    5283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 11:47:19.462343    5283 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:47:19.465104    5283 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:47:19.467878    5283 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:47:19.467885    5283 kubeadm.go:157] found existing configuration files:
	
	I0915 11:47:19.467914    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf
	I0915 11:47:19.470904    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:47:19.470935    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:47:19.473879    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf
	I0915 11:47:19.476379    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:47:19.476408    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:47:19.479485    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf
	I0915 11:47:19.482587    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:47:19.482611    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:47:19.485139    5283 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf
	I0915 11:47:19.487816    5283 kubeadm.go:163] "https://control-plane.minikube.internal:50310" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50310 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:47:19.487840    5283 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
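
The four grep-then-rm exchanges above implement a simple stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so kubeadm can regenerate it. Condensed into shell (the endpoint is the one shown in the log; this is a sketch of the pattern, not minikube's implementation):

    ENDPOINT="https://control-plane.minikube.internal:50310"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # A missing file or a missing endpoint both make grep exit nonzero,
      # which triggers removal, matching the behavior in the log above.
      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$conf"; then
        sudo rm -f "/etc/kubernetes/$conf"
      fi
    done
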
	I0915 11:47:19.491168    5283 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 11:47:19.507949    5283 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0915 11:47:19.507977    5283 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 11:47:19.558293    5283 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 11:47:19.558349    5283 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 11:47:19.558448    5283 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 11:47:19.610438    5283 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 11:47:19.613586    5283 out.go:235]   - Generating certificates and keys ...
	I0915 11:47:19.613646    5283 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 11:47:19.613680    5283 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 11:47:19.613730    5283 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 11:47:19.613760    5283 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 11:47:19.613795    5283 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 11:47:19.613831    5283 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 11:47:19.613866    5283 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 11:47:19.613897    5283 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 11:47:19.613934    5283 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 11:47:19.613979    5283 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 11:47:19.614010    5283 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 11:47:19.614042    5283 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 11:47:19.652403    5283 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 11:47:19.839426    5283 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 11:47:19.984038    5283 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 11:47:20.021112    5283 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 11:47:20.049757    5283 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 11:47:20.050149    5283 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 11:47:20.050176    5283 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 11:47:20.144301    5283 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 11:47:20.148466    5283 out.go:235]   - Booting up control plane ...
	I0915 11:47:20.148521    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 11:47:20.148558    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 11:47:20.148613    5283 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 11:47:20.148667    5283 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 11:47:20.148763    5283 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 11:47:19.385993    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:19.386015    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:25.151381    5283 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002954 seconds
	I0915 11:47:25.151475    5283 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 11:47:25.156791    5283 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 11:47:25.666417    5283 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 11:47:25.666547    5283 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-196000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 11:47:26.170081    5283 kubeadm.go:310] [bootstrap-token] Using token: sxdjwk.a6hmvxcjy86judm9
	I0915 11:47:26.175792    5283 out.go:235]   - Configuring RBAC rules ...
	I0915 11:47:26.175845    5283 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 11:47:26.175884    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 11:47:26.180336    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 11:47:26.181181    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 11:47:26.182002    5283 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 11:47:26.182748    5283 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 11:47:26.186881    5283 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 11:47:26.347235    5283 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 11:47:26.574090    5283 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 11:47:26.574521    5283 kubeadm.go:310] 
	I0915 11:47:26.574553    5283 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 11:47:26.574556    5283 kubeadm.go:310] 
	I0915 11:47:26.574592    5283 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 11:47:26.574601    5283 kubeadm.go:310] 
	I0915 11:47:26.574617    5283 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 11:47:26.574653    5283 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 11:47:26.574682    5283 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 11:47:26.574685    5283 kubeadm.go:310] 
	I0915 11:47:26.574711    5283 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 11:47:26.574714    5283 kubeadm.go:310] 
	I0915 11:47:26.574736    5283 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 11:47:26.574742    5283 kubeadm.go:310] 
	I0915 11:47:26.574771    5283 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 11:47:26.574807    5283 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 11:47:26.574845    5283 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 11:47:26.574849    5283 kubeadm.go:310] 
	I0915 11:47:26.574901    5283 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 11:47:26.574948    5283 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 11:47:26.574952    5283 kubeadm.go:310] 
	I0915 11:47:26.575011    5283 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sxdjwk.a6hmvxcjy86judm9 \
	I0915 11:47:26.575103    5283 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd \
	I0915 11:47:26.575116    5283 kubeadm.go:310] 	--control-plane 
	I0915 11:47:26.575118    5283 kubeadm.go:310] 
	I0915 11:47:26.575154    5283 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 11:47:26.575162    5283 kubeadm.go:310] 
	I0915 11:47:26.575202    5283 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sxdjwk.a6hmvxcjy86judm9 \
	I0915 11:47:26.575269    5283 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd 
	I0915 11:47:26.575329    5283 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
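
The --discovery-token-ca-cert-hash value in the join commands above pins the cluster CA during bootstrap: it is the SHA-256 of the CA certificate's DER-encoded public key. It can be recomputed on the control-plane node with the standard openssl pipeline from the kubeadm documentation (assuming the default RSA CA; the certificate path is the certificateDir reported earlier in this log):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum | awk '{print "sha256:" $1}'
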
	I0915 11:47:26.575338    5283 cni.go:84] Creating CNI manager for ""
	I0915 11:47:26.575346    5283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:47:26.578250    5283 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 11:47:26.584219    5283 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 11:47:26.587664    5283 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
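
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration selected above. Its exact contents are not shown in the log; a minimal bridge conflist of the same general shape (illustrative values only, not minikube's verbatim file) would be:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
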
	I0915 11:47:26.592622    5283 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 11:47:26.592676    5283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 11:47:26.592694    5283 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-196000 minikube.k8s.io/updated_at=2024_09_15T11_47_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=running-upgrade-196000 minikube.k8s.io/primary=true
	I0915 11:47:26.633105    5283 ops.go:34] apiserver oom_adj: -16
	I0915 11:47:26.633103    5283 kubeadm.go:1113] duration metric: took 40.475291ms to wait for elevateKubeSystemPrivileges
	I0915 11:47:26.633202    5283 kubeadm.go:394] duration metric: took 4m12.370834958s to StartCluster
	I0915 11:47:26.633216    5283 settings.go:142] acquiring lock: {Name:mke41fab1fd2ef0229fde23400affd11462eeb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:47:26.633312    5283 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:47:26.633685    5283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:47:26.633907    5283 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:47:26.633919    5283 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 11:47:26.633970    5283 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-196000"
	I0915 11:47:26.633977    5283 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-196000"
	I0915 11:47:26.633995    5283 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-196000"
	I0915 11:47:26.634004    5283 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-196000"
	W0915 11:47:26.634008    5283 addons.go:243] addon storage-provisioner should already be in state true
	I0915 11:47:26.634018    5283 host.go:66] Checking if "running-upgrade-196000" exists ...
	I0915 11:47:26.634062    5283 config.go:182] Loaded profile config "running-upgrade-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:47:26.634884    5283 kapi.go:59] client config for running-upgrade-196000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/running-upgrade-196000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103ced800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:47:26.634998    5283 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-196000"
	W0915 11:47:26.635003    5283 addons.go:243] addon default-storageclass should already be in state true
	I0915 11:47:26.635018    5283 host.go:66] Checking if "running-upgrade-196000" exists ...
	I0915 11:47:26.638214    5283 out.go:177] * Verifying Kubernetes components...
	I0915 11:47:26.638509    5283 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 11:47:26.642309    5283 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 11:47:26.642316    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:47:26.646156    5283 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:47:26.650229    5283 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:47:26.654153    5283 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:47:26.654159    5283 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 11:47:26.654164    5283 sshutil.go:53] new ssh client: &{IP:localhost Port:50278 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/running-upgrade-196000/id_rsa Username:docker}
	I0915 11:47:26.737784    5283 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:47:26.742592    5283 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:47:26.742640    5283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:47:26.746437    5283 api_server.go:72] duration metric: took 112.521167ms to wait for apiserver process to appear ...
	I0915 11:47:26.746444    5283 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:47:26.746450    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:26.750966    5283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 11:47:26.776208    5283 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:47:27.109210    5283 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 11:47:27.109222    5283 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 11:47:24.387402    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:24.387440    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:31.748522    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:31.748562    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:29.389345    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:29.389424    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:36.748860    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:36.748907    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:34.391877    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:34.391987    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:41.749608    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:41.749632    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:39.393367    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:39.393398    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:46.750112    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:46.750170    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:44.395134    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:44.395233    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:51.750946    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:51.750984    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:49.397764    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:49.397811    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:56.751872    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:56.751896    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0915 11:47:57.111226    5283 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0915 11:47:57.114922    5283 out.go:177] * Enabled addons: storage-provisioner
	I0915 11:47:57.122927    5283 addons.go:510] duration metric: took 30.489387541s for enable addons: enabled=[storage-provisioner]
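
Both processes in this interleaved log (pids 5283 and 5437) are running the same retry loop: an HTTPS GET of /healthz with a short client-side timeout, which is why every failure surfaces as a Client.Timeout error rather than a connection refusal. A single probe can be reproduced by hand from inside the guest (illustrative; -k skips TLS verification the way a quick manual check would):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz && echo ok
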
	I0915 11:47:54.400186    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:54.400453    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:54.421437    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:47:54.421573    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:54.436076    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:47:54.436173    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:54.448418    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:47:54.448503    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:54.459250    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:47:54.459334    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:54.470074    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:47:54.470152    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:54.484894    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:47:54.484970    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:54.494895    5437 logs.go:276] 0 containers: []
	W0915 11:47:54.494910    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:54.494982    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:54.505408    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:47:54.505425    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:47:54.505429    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:47:54.521174    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:47:54.521183    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:47:54.532949    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:54.532957    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:54.574175    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:54.574188    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:54.656220    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:47:54.656233    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:47:54.670651    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:47:54.670666    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:47:54.713689    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:47:54.713702    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:47:54.728670    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:47:54.728681    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:47:54.739987    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:47:54.739999    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:47:54.755764    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:47:54.755778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:47:54.769640    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:47:54.769649    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:47:54.781346    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:47:54.781355    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:47:54.798933    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:54.798949    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:54.824527    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:54.824540    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:54.828815    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:47:54.828822    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:47:54.843113    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:47:54.843126    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:47:57.360353    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:01.753269    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:01.753300    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:02.362658    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:02.363195    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:02.408505    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:02.408648    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:02.429614    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:02.429705    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:02.443901    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:02.443988    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:02.456298    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:02.456394    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:02.466945    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:02.467024    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:02.477329    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:02.477420    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:02.487622    5437 logs.go:276] 0 containers: []
	W0915 11:48:02.487633    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:02.487700    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:02.498652    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:02.498669    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:02.498674    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:02.503292    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:02.503299    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:02.540149    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:02.540159    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:02.555144    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:02.555158    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:02.567159    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:02.567172    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:02.582345    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:02.582360    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:02.593661    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:02.593674    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:02.609708    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:02.609724    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:02.624350    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:02.624367    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:02.649434    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:02.649443    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:02.688650    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:02.688661    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:02.703497    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:02.703510    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:02.715769    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:02.715779    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:02.727840    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:02.727854    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:02.767013    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:02.767023    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:02.784987    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:02.784997    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:06.754817    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:06.754858    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:05.300418    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:11.756796    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:11.756822    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:10.301447    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:10.301730    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:10.323950    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:10.324067    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:10.339846    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:10.339937    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:10.352488    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:10.352582    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:10.363569    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:10.363657    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:10.374141    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:10.374220    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:10.388441    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:10.388523    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:10.398614    5437 logs.go:276] 0 containers: []
	W0915 11:48:10.398629    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:10.398698    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:10.409057    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:10.409073    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:10.409078    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:10.449462    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:10.449478    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:10.464408    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:10.464417    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:10.475977    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:10.475988    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:10.513326    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:10.513336    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:10.549655    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:10.549667    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:10.562170    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:10.562188    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:10.576021    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:10.576035    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:10.589748    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:10.589761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:10.603835    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:10.603846    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:10.618322    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:10.618336    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:10.633391    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:10.633402    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:10.637670    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:10.637682    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:10.650584    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:10.650595    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:10.663097    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:10.663108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:10.680548    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:10.680559    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:13.206618    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:16.758920    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:16.758941    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:18.208898    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:18.209109    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:18.221634    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:18.221712    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:18.237891    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:18.237977    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:18.248566    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:18.248655    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:18.259019    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:18.259113    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:18.269115    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:18.269198    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:18.279412    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:18.279487    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:18.289798    5437 logs.go:276] 0 containers: []
	W0915 11:48:18.289810    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:18.289887    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:18.300164    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:18.300180    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:18.300186    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:18.326502    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:18.326514    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:18.339402    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:18.339417    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:18.376018    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:18.376032    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:18.390139    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:18.390152    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:18.403154    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:18.403164    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:18.419452    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:18.419466    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:18.457963    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:18.457976    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:18.495364    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:18.495374    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:18.506257    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:18.506267    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:18.521633    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:18.521642    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:18.525837    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:18.525843    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:18.539991    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:18.540005    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:18.551502    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:18.551512    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:18.565534    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:18.565544    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:18.579557    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:18.579569    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:21.761081    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:21.761139    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:21.101842    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:26.762155    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:26.762315    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:26.772718    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:26.772803    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:26.783912    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:26.783985    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:26.794985    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:26.795070    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:26.806144    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:26.806229    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:26.822865    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:26.822940    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:26.833592    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:26.833671    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:26.844089    5283 logs.go:276] 0 containers: []
	W0915 11:48:26.844101    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:26.844179    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:26.854532    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:26.854547    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:26.854554    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:26.871772    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:26.871782    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:26.882922    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:26.882932    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:26.917808    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:26.917818    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:26.922258    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:26.922263    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:26.956910    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:26.956925    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:26.971364    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:26.971376    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:26.986285    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:26.986296    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:26.997976    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:26.997988    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:27.009282    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:27.009295    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:27.023742    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:27.023755    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:27.034985    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:27.034997    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:27.046910    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:27.046922    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:26.104082    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:26.104316    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:26.127123    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:26.127231    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:26.141008    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:26.141102    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:26.155972    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:26.156059    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:26.169046    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:26.169139    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:26.179532    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:26.179616    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:26.189860    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:26.189943    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:26.200095    5437 logs.go:276] 0 containers: []
	W0915 11:48:26.200105    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:26.200177    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:26.210883    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:26.210903    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:26.210908    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:26.222843    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:26.222854    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:26.259977    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:26.259985    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:26.274483    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:26.274493    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:26.285392    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:26.285405    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:26.299340    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:26.299349    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:26.311830    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:26.311842    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:26.323554    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:26.323565    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:26.328324    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:26.328330    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:26.343061    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:26.343072    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:26.380913    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:26.380925    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:26.394909    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:26.394921    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:26.412688    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:26.412699    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:26.448675    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:26.448689    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:26.463393    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:26.463403    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:26.488718    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:26.488726    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:29.572176    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:29.005671    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:34.574467    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:34.574618    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:34.586789    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:34.586876    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:34.598187    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:34.598270    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:34.612182    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:34.612267    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:34.623077    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:34.623159    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:34.633781    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:34.633855    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:34.644422    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:34.644503    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:34.654384    5283 logs.go:276] 0 containers: []
	W0915 11:48:34.654397    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:34.654468    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:34.664711    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:34.664730    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:34.664735    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:34.678179    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:34.678190    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:34.711706    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:34.711717    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:34.725835    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:34.725851    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:34.741859    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:34.741874    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:34.753435    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:34.753444    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:34.767955    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:34.767965    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:34.779443    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:34.779452    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:34.791196    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:34.791211    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:34.795532    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:34.795538    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:34.833926    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:34.833937    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:34.847845    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:34.847857    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:34.865602    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:34.865613    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:37.390991    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:34.007950    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:34.008139    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:34.028547    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:34.028644    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:34.042948    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:34.043037    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:34.054209    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:34.054287    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:34.065268    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:34.065354    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:34.075473    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:34.075555    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:34.089865    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:34.089949    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:34.106205    5437 logs.go:276] 0 containers: []
	W0915 11:48:34.106218    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:34.106292    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:34.116363    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:34.116382    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:34.116387    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:34.133958    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:34.133969    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:34.145764    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:34.145774    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:34.157475    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:34.157487    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:34.173474    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:34.173483    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:34.211237    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:34.211247    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:34.225842    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:34.225853    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:34.239490    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:34.239499    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:34.256786    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:34.256796    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:34.268126    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:34.268137    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:34.281075    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:34.281090    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:34.321666    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:34.321676    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:34.355639    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:34.355651    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:34.370346    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:34.370355    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:34.394843    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:34.394855    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:34.399068    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:34.399073    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:36.916473    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:42.393373    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:42.393534    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:42.404907    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:42.404995    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:42.415195    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:42.415273    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:42.426062    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:42.426147    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:42.437709    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:42.437795    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:42.448202    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:42.448287    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:42.458268    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:42.458346    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:42.468360    5283 logs.go:276] 0 containers: []
	W0915 11:48:42.468372    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:42.468447    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:42.479058    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:42.479076    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:42.479083    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:42.490521    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:42.490533    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:42.495024    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:42.495033    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:42.534010    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:42.534021    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:42.549131    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:42.549142    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:42.561298    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:42.561309    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:42.579265    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:42.579277    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:42.604946    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:42.604959    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:42.616713    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:42.616730    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:42.652457    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:42.652466    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:42.674378    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:42.674390    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:42.686093    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:42.686106    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:42.702109    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:42.702120    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:41.918720    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:41.918900    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:41.935576    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:41.935682    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:41.948398    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:41.948482    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:41.963965    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:41.964037    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:41.974588    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:41.974676    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:41.985295    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:41.985381    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:41.995792    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:41.995875    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:42.007643    5437 logs.go:276] 0 containers: []
	W0915 11:48:42.007659    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:42.007725    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:42.018392    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:42.018409    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:42.018415    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:42.032313    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:42.032322    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:42.050178    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:42.050187    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:42.088852    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:42.088863    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:42.124387    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:42.124402    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:42.146176    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:42.146186    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:42.184750    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:42.184761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:42.198957    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:42.198973    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:42.210628    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:42.210638    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:42.236154    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:42.236169    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:42.252213    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:42.252223    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:42.266509    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:42.266518    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:42.281892    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:42.281909    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:42.293179    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:42.293188    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:42.305040    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:42.305051    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:42.316462    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:42.316475    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:45.219764    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:44.821951    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:50.222015    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:50.222116    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:50.235482    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:50.235561    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:50.246517    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:50.246606    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:50.257612    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:50.257700    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:50.268352    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:50.268430    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:50.278583    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:50.278674    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:50.288952    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:50.289023    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:50.299283    5283 logs.go:276] 0 containers: []
	W0915 11:48:50.299297    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:50.299370    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:50.310089    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:50.310106    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:50.310111    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:50.321708    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:50.321719    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:50.346387    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:50.346396    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:50.380609    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:50.380618    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:50.385235    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:50.385244    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:50.398848    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:50.398857    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:50.411029    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:50.411039    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:50.422991    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:50.423002    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:50.442283    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:50.442293    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:50.459942    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:50.459952    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:50.471448    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:50.471457    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:50.514120    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:50.514132    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:50.529108    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:50.529122    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:53.050037    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:49.824354    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:49.824852    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:49.858464    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:49.858618    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:49.877220    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:49.877324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:49.894148    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:49.894227    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:49.905232    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:49.905324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:49.916129    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:49.916208    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:49.927305    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:49.927390    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:49.942098    5437 logs.go:276] 0 containers: []
	W0915 11:48:49.942108    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:49.942184    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:49.953181    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:49.953200    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:49.953206    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:49.967407    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:49.967416    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:49.978975    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:49.978986    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:49.990694    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:49.990704    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:50.008414    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:50.008423    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:50.046791    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:50.046802    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:50.059189    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:50.059203    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:50.095110    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:50.095121    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:50.119904    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:50.119914    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:50.157194    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:50.157203    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:50.171827    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:50.171837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:50.185802    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:50.185813    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:50.197372    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:50.197383    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:50.215992    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:50.216001    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:50.234213    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:50.234224    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:50.246719    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:50.246727    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:52.753161    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:58.052307    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:58.052423    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:58.064453    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:48:58.064554    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:58.075971    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:48:58.076052    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:58.087463    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:48:58.087542    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:58.098375    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:48:58.098458    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:58.110580    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:48:58.110669    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:58.122082    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:48:58.122166    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:58.133196    5283 logs.go:276] 0 containers: []
	W0915 11:48:58.133209    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:58.133283    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:58.145416    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:48:58.145434    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:48:58.145440    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:48:58.158323    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:48:58.158335    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:48:58.178464    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:48:58.178476    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:48:58.190855    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:58.190866    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:58.226342    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:48:58.226353    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:48:58.240536    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:48:58.240545    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:48:58.251844    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:48:58.251859    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:48:58.269041    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:48:58.269053    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:48:58.282483    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:58.282498    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:58.306107    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:48:58.306120    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:58.318062    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:58.318074    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:58.350723    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:58.350733    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:58.355002    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:48:58.355010    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:48:57.755382    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:57.755562    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:57.767553    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:57.767646    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:57.778445    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:57.778525    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:57.789174    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:57.789255    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:57.799772    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:57.799847    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:57.815094    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:57.815171    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:57.826349    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:57.826428    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:57.836472    5437 logs.go:276] 0 containers: []
	W0915 11:48:57.836482    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:57.836537    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:57.847389    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:57.847406    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:57.847411    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:57.861873    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:57.861884    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:57.877696    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:57.877711    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:57.891871    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:57.891885    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:57.904634    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:57.904646    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:57.916280    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:57.916292    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:57.931659    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:57.931674    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:57.957331    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:57.957347    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:57.961842    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:57.961849    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:57.999050    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:57.999063    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:58.015174    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:58.015186    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:58.030197    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:58.030206    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:58.066716    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:58.066726    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:58.083392    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:58.083404    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:58.125072    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:58.125083    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:58.140978    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:58.140990    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:00.869237    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:00.668196    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:05.871429    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:05.871523    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:05.882154    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:05.882233    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:05.896622    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:05.896709    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:05.909487    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:05.909571    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:05.920998    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:05.921126    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:05.939544    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:05.939627    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:05.954293    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:05.954376    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:05.967238    5283 logs.go:276] 0 containers: []
	W0915 11:49:05.967251    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:05.967330    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:05.984723    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:05.984738    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:05.984744    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:05.997304    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:05.997316    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:06.013386    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:06.013397    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:06.026716    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:06.026730    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:06.045647    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:06.045658    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:06.059714    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:06.059726    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:06.074869    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:06.074880    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:06.089149    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:06.089162    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:06.135567    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:06.135583    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:06.147493    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:06.147506    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:06.159131    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:06.159143    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:06.184252    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:06.184262    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:06.219291    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:06.219300    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:05.669034    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:05.669167    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:05.683222    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:05.683323    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:05.694748    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:05.694849    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:05.705932    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:05.706012    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:05.716711    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:05.716798    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:05.727909    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:05.727992    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:05.739389    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:05.739461    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:05.749435    5437 logs.go:276] 0 containers: []
	W0915 11:49:05.749449    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:05.749507    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:05.759770    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:05.759788    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:05.759793    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:05.771443    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:05.771453    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:05.783172    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:05.783181    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:05.797999    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:05.798009    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:05.809741    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:05.809751    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:05.823346    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:05.823360    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:05.848075    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:05.848093    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:05.888381    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:05.888394    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:05.893316    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:05.893326    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:05.905986    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:05.906002    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:05.921531    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:05.921539    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:05.937109    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:05.937121    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:05.976295    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:05.976311    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:05.993160    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:05.993176    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:06.019042    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:06.019056    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:06.059865    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:06.059876    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:08.576578    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:08.725674    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:13.578848    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:13.579055    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:13.594384    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:13.594479    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:13.606619    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:13.606711    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:13.617229    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:13.617305    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:13.627883    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:13.627962    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:13.638607    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:13.638684    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:13.649005    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:13.649077    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:13.662953    5437 logs.go:276] 0 containers: []
	W0915 11:49:13.662965    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:13.663036    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:13.673526    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:13.673541    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:13.673546    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:13.688999    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:13.689009    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:13.706305    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:13.706320    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:13.751211    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:13.751221    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:13.764560    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:13.764571    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:13.769409    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:13.769419    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:13.781547    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:13.781559    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:13.794585    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:13.794599    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:13.814771    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:13.814788    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:13.829753    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:13.829767    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:13.842322    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:13.842337    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:13.860694    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:13.860707    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:13.727848    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:13.727959    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:13.739385    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:13.739471    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:13.750581    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:13.750662    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:13.761712    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:13.761804    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:13.773312    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:13.773401    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:13.785399    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:13.785484    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:13.797151    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:13.797235    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:13.808303    5283 logs.go:276] 0 containers: []
	W0915 11:49:13.808318    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:13.808398    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:13.820065    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:13.820082    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:13.820088    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:13.832499    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:13.832514    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:13.845220    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:13.845231    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:13.864969    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:13.864982    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:13.877317    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:13.877328    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:13.903007    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:13.903015    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:13.939602    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:13.939620    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:13.944534    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:13.944541    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:13.958917    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:13.958929    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:13.974976    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:13.974987    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:13.987920    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:13.987934    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:14.019126    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:14.019141    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:14.056390    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:14.056401    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:16.572310    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:13.902715    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:13.902731    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:13.942975    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:13.942985    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:13.962364    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:13.962375    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:13.988539    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:13.988552    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:16.507149    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:21.574459    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:21.574561    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:21.588171    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:21.588258    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:21.599640    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:21.599727    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:21.610830    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:21.610917    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:21.622341    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:21.622431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:21.633785    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:21.633871    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:21.645021    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:21.645103    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:21.656778    5283 logs.go:276] 0 containers: []
	W0915 11:49:21.656791    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:21.656863    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:21.668211    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:21.668227    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:21.668233    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:21.694911    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:21.694924    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:21.731740    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:21.731751    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:21.736754    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:21.736763    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:21.752254    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:21.752272    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:21.769218    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:21.769229    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:21.786282    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:21.786300    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:21.805783    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:21.805792    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:21.823858    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:21.823868    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:21.837233    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:21.837245    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:21.874957    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:21.874970    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:21.890392    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:21.890403    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:21.906361    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:21.906373    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:21.509397    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:21.509718    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:21.537122    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:21.537263    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:21.554070    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:21.554172    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:21.566902    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:21.566991    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:21.580134    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:21.580225    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:21.594887    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:21.594971    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:21.612294    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:21.612350    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:21.623549    5437 logs.go:276] 0 containers: []
	W0915 11:49:21.623557    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:21.623603    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:21.634950    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:21.634963    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:21.634969    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:21.651747    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:21.651763    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:21.656830    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:21.656837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:21.672405    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:21.672415    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:21.686759    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:21.686769    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:21.699945    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:21.699956    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:21.739749    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:21.739762    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:21.779327    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:21.779348    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:21.791259    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:21.791271    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:21.804051    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:21.804065    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:21.819549    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:21.819561    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:21.844359    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:21.844371    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:21.884510    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:21.884526    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:21.897097    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:21.897108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:21.916412    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:21.916429    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:21.928863    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:21.928874    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:24.419485    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:24.444715    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:29.421797    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:29.422083    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:29.442508    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:29.442621    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:29.458065    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:29.458159    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:29.471015    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:29.471106    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:29.482736    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:29.482825    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:29.494094    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:29.494173    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:29.505761    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:29.505842    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:29.516709    5283 logs.go:276] 0 containers: []
	W0915 11:49:29.516721    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:29.516795    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:29.528721    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:29.528737    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:29.528743    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:29.533879    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:29.533891    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:29.574630    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:29.574646    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:29.587119    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:29.587130    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:29.605719    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:29.605732    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:29.618095    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:29.618107    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:29.631318    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:29.631331    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:29.648256    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:29.648270    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:29.675340    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:29.675359    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:29.711540    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:29.711555    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:29.727088    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:29.727104    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:29.742045    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:29.742060    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:29.754348    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:29.754362    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:32.271730    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:29.446825    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:29.446949    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:29.461727    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:29.461819    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:29.474625    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:29.474707    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:29.491143    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:29.491226    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:29.507816    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:29.507889    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:29.526355    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:29.526439    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:29.537743    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:29.537837    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:29.548390    5437 logs.go:276] 0 containers: []
	W0915 11:49:29.548403    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:29.548475    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:29.559158    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:29.559176    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:29.559181    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:29.598843    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:29.598860    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:29.611502    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:29.611520    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:29.623756    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:29.623768    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:29.642466    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:29.642478    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:29.657382    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:29.657400    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:29.672939    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:29.672950    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:29.677729    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:29.677739    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:29.693080    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:29.693092    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:29.708672    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:29.708687    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:29.722104    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:29.722118    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:29.735450    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:29.735462    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:29.760552    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:29.760565    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:29.773260    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:29.773271    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:29.809209    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:29.809220    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:29.846748    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:29.846761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:32.362496    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:37.274050    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:37.274595    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:37.313961    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:37.314136    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:37.336380    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:37.336510    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:37.354339    5283 logs.go:276] 2 containers: [ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:37.354431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:37.366243    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:37.366312    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:37.377871    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:37.377925    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:37.389129    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:37.389177    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:37.399938    5283 logs.go:276] 0 containers: []
	W0915 11:49:37.399950    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:37.400026    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:37.411064    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:37.411083    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:37.411090    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:37.451628    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:37.451640    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:37.466444    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:37.466456    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:37.479681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:37.479694    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:37.492081    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:37.492097    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:37.504287    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:37.504300    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:37.517638    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:37.517650    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:37.555853    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:37.555871    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:37.561325    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:37.561341    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:37.576510    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:37.576526    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:37.589271    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:37.589284    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:37.605459    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:37.605473    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:37.639361    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:37.639376    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:37.364610    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:37.364721    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:37.376906    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:37.377017    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:37.388523    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:37.388614    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:37.400229    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:37.400274    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:37.411350    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:37.411432    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:37.422982    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:37.423067    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:37.434706    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:37.434787    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:37.446049    5437 logs.go:276] 0 containers: []
	W0915 11:49:37.446063    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:37.446143    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:37.458195    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:37.458215    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:37.458221    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:37.472828    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:37.472844    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:37.485919    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:37.485932    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:37.499421    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:37.499434    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:37.539136    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:37.539149    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:37.556167    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:37.556175    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:37.569946    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:37.569959    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:37.583734    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:37.583746    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:37.596948    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:37.596962    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:37.643313    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:37.643326    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:37.664623    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:37.664637    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:37.670460    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:37.670472    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:37.686230    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:37.686244    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:37.702014    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:37.702025    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:37.717387    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:37.717401    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:37.743796    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:37.743812    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:40.169914    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:40.284297    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:45.175199    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:45.175684    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:45.208867    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:45.209019    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:45.227583    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:45.227690    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:45.242593    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:45.242687    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:45.254232    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:45.254314    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:45.264402    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:45.264472    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:45.274921    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:45.275003    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:45.284866    5283 logs.go:276] 0 containers: []
	W0915 11:49:45.284878    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:45.284953    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:45.296467    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:45.296487    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:49:45.296493    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:49:45.314582    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:45.314591    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:45.327195    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:45.327208    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:45.339949    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:45.339961    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:45.358326    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:45.358338    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:45.363263    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:45.363274    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:45.405984    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:45.405995    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:45.421453    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:49:45.421465    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:49:45.435757    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:45.435771    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:45.448570    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:45.448581    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:45.475165    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:45.475174    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:45.490945    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:45.490955    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:45.528070    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:45.528085    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:45.547544    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:45.547553    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:45.560583    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:45.560595    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:48.083925    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:45.289489    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:45.289570    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:45.300840    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:45.300925    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:45.312949    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:45.313039    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:45.324653    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:45.324748    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:45.336735    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:45.336819    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:45.350335    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:45.350425    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:45.361512    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:45.361604    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:45.374441    5437 logs.go:276] 0 containers: []
	W0915 11:49:45.374453    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:45.374532    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:45.389638    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:45.389656    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:45.389663    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:45.401970    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:45.401983    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:45.416599    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:45.416613    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:45.429751    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:45.429766    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:45.442936    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:45.442948    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:45.483595    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:45.483611    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:45.488552    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:45.488564    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:45.504525    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:45.504535    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:45.546025    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:45.546039    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:45.561534    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:45.561543    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:45.578128    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:45.578144    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:45.595710    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:45.595727    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:45.616822    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:45.616831    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:45.651471    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:45.651481    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:45.671566    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:45.671576    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:45.696387    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:45.696397    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:48.213191    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:53.090173    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:53.090559    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:53.125120    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:49:53.125282    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:53.149929    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:49:53.150020    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:53.163253    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:49:53.163347    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:53.178071    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:49:53.178148    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:53.188701    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:49:53.188778    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:53.203483    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:49:53.203562    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:53.213939    5283 logs.go:276] 0 containers: []
	W0915 11:49:53.213950    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:53.214021    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:53.230147    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:49:53.230168    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:53.230174    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:53.235828    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:49:53.235840    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:49:53.248884    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:49:53.248896    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:49:53.261613    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:49:53.261627    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:49:53.283061    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:49:53.283072    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:53.295610    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:49:53.295620    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:49:53.310959    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:49:53.310972    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:49:53.327141    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:49:53.327159    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:49:53.350131    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:49:53.350144    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:49:53.370406    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:53.370424    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:53.397878    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:53.397893    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:53.434795    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:53.434806    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:53.472176    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:49:53.472184    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:49:53.484848    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:49:53.484860    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:49:53.500219    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:49:53.500237    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:49:53.219273    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:53.219356    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:53.231932    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:53.232014    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:53.244361    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:53.244447    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:53.256651    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:53.256734    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:53.268547    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:53.268626    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:53.280545    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:53.280631    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:53.294588    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:53.294673    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:53.305965    5437 logs.go:276] 0 containers: []
	W0915 11:49:53.305975    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:53.306046    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:53.317248    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:53.317266    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:53.317271    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:53.361565    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:53.361587    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:53.374440    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:53.374457    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:53.388954    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:53.388964    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:53.393286    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:53.393295    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:53.432683    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:53.432699    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:53.447921    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:53.447932    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:53.472159    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:53.472173    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:53.491765    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:53.491779    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:53.515367    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:53.515376    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:53.530374    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:53.530386    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:53.546883    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:53.546894    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:53.559178    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:53.559194    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:53.573772    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:53.573785    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:53.585595    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:53.585605    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:53.599657    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:53.599672    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
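	[editor's note] The trace above interleaves two minikube processes (pids 5283 and 5437), each stuck in the same wait loop: probe the apiserver's /healthz endpoint, hit the client-side timeout after roughly five seconds, dump diagnostics, and retry about every eight seconds. A minimal Go sketch of that probe-and-retry step, assuming a plain net/http client; the URL and the ~5s timeout are read off the log lines, while the names and the sleep interval are illustrative, not minikube's actual code:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz issues one GET against the health endpoint, mirroring the
    // "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded"
    // pairs in the trace. Illustrative sketch only.
    func pollHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // the log shows ~5s between check and failure
            Transport: &http.Transport{
                // a real client would trust the cluster CA; skipping
                // verification keeps this sketch self-contained
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        for {
            if err := pollHealthz("https://10.0.2.15:8443/healthz"); err == nil {
                return // healthy
            }
            // on failure the real binary gathers container logs (next sketch)
            time.Sleep(3 * time.Second) // checks land ~8s apart in the trace
        }
    }

	Every "Gathering logs for ..." burst below is that failure branch firing.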
	I0915 11:49:56.018484    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:56.140865    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:01.023360    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:01.023890    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:01.058499    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:01.058665    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:01.078113    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:01.078226    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:01.092609    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:01.092707    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:01.108995    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:01.109072    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:01.123999    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:01.124088    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:01.135234    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:01.135319    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:01.146230    5283 logs.go:276] 0 containers: []
	W0915 11:50:01.146240    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:01.146282    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:01.157508    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:01.157522    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:01.157527    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:01.162316    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:01.162328    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:01.178161    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:01.178175    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:01.196893    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:01.196911    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:01.209919    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:01.209932    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:01.237054    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:01.237074    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:01.274065    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:01.274082    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:01.289742    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:01.289757    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:01.302855    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:01.302869    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:01.315492    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:01.315506    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:01.329730    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:01.329745    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:01.343704    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:01.343717    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:01.380960    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:01.380971    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:01.396590    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:01.396606    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:01.409065    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:01.409076    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:01.145380    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:01.145499    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:01.157038    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:01.157128    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:01.168275    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:01.168366    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:01.180420    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:01.180509    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:01.191361    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:01.191454    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:01.203740    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:01.203823    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:01.215706    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:01.215797    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:01.227662    5437 logs.go:276] 0 containers: []
	W0915 11:50:01.227674    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:01.227749    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:01.239148    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:01.239172    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:01.239177    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:01.275443    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:01.275451    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:01.290832    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:01.290841    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:01.303604    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:01.303613    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:01.318521    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:01.318532    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:01.331065    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:01.331074    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:01.371086    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:01.371107    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:01.375957    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:01.375971    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:01.417302    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:01.417317    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:01.437566    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:01.437578    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:01.449294    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:01.449305    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:01.464059    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:01.464068    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:01.476248    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:01.476260    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:01.493501    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:01.493511    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:01.515798    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:01.515804    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:01.530525    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:01.530536    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
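	[editor's note] Each of those bursts follows the same two-phase shape: first resolve which container IDs back each control-plane component via docker ps name filters, then tail 400 lines from every match (plus journalctl for kubelet and Docker, dmesg, container status, and a kubectl describe nodes). A hedged Go sketch of the enumeration-and-tail phase; the docker commands are copied from the log, while listContainers and the surrounding wiring are made-up illustration rather than a minikube API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers mirrors the per-component
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls.
    // Hypothetical helper, not a minikube API.
    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        // the same component set the trace iterates over
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            ids, err := listContainers(c)
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue // e.g. kindnet is absent throughout this run
            }
            for _, id := range ids {
                // matches: /bin/bash -c "docker logs --tail 400 <id>"
                out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("%s [%s]: %d bytes of logs\n", c, id, len(out))
            }
        }
    }

	Note that pid 5437 keeps finding two IDs for kube-apiserver, etcd, kube-scheduler, and kube-controller-manager where pid 5283 finds one; since -a also lists exited containers, that pattern likely means those components restarted in 5437's cluster, leaving a dead container alongside the live one.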
	I0915 11:50:03.924043    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:04.049652    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:08.927690    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:08.927856    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:08.941861    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:08.941954    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:08.954078    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:08.954157    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:08.964756    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:08.964849    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:08.975383    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:08.975469    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:08.985744    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:08.985823    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:08.996432    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:08.996508    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:09.006872    5283 logs.go:276] 0 containers: []
	W0915 11:50:09.006884    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:09.006951    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:09.017577    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:09.017600    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:09.017605    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:09.034253    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:09.034263    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:09.046138    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:09.046149    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:09.061881    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:09.061894    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:09.074644    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:09.074658    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:09.087254    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:09.087268    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:09.129961    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:09.129975    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:09.155193    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:09.155206    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:09.160069    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:09.160081    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:09.174959    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:09.174971    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:09.192171    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:09.192179    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:09.204438    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:09.204452    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:09.242596    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:09.242615    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:09.259405    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:09.259419    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:09.276789    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:09.276804    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:11.791095    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:09.053236    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:09.053336    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:09.065235    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:09.065324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:09.076627    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:09.076711    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:09.088216    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:09.088299    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:09.099698    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:09.099784    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:09.111101    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:09.111187    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:09.124400    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:09.124524    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:09.135440    5437 logs.go:276] 0 containers: []
	W0915 11:50:09.135453    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:09.135525    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:09.146964    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:09.146981    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:09.146987    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:09.160921    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:09.160929    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:09.185688    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:09.185705    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:09.190711    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:09.190724    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:09.227839    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:09.227854    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:09.248295    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:09.248320    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:09.288113    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:09.288128    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:09.303134    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:09.303145    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:09.314826    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:09.314837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:09.329482    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:09.329496    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:09.341416    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:09.341426    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:09.355220    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:09.355229    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:09.367309    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:09.367320    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:09.384749    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:09.384760    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:09.397452    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:09.397461    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:09.411499    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:09.411509    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:11.951167    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:16.794185    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:16.794310    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:16.805594    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:16.805679    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:16.816522    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:16.816603    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:16.827292    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:16.827374    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:16.838027    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:16.838108    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:16.848967    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:16.849043    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:16.859471    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:16.859562    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:16.869369    5283 logs.go:276] 0 containers: []
	W0915 11:50:16.869380    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:16.869449    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:16.880431    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:16.880448    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:16.880454    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:16.885129    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:16.885137    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:16.899495    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:16.899509    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:16.911323    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:16.911341    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:16.947534    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:16.947548    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:16.969050    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:16.969066    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:16.981810    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:16.981821    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:16.995254    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:16.995267    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:17.021078    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:17.021091    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:17.056557    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:17.056568    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:17.068400    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:17.068412    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:17.082470    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:17.082483    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:17.095283    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:17.095296    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:17.110960    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:17.110972    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:17.123930    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:17.123944    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:16.953453    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:16.953569    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:16.964870    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:16.964967    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:16.977857    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:16.977942    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:16.989043    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:16.989126    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:17.000935    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:17.001025    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:17.012259    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:17.012373    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:17.023276    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:17.023357    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:17.034878    5437 logs.go:276] 0 containers: []
	W0915 11:50:17.034889    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:17.034967    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:17.045928    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:17.045946    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:17.045952    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:17.069082    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:17.069092    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:17.089788    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:17.089805    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:17.107051    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:17.107064    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:17.126000    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:17.126010    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:17.138779    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:17.138791    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:17.151466    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:17.151478    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:17.191134    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:17.191145    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:17.203363    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:17.203372    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:17.241499    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:17.241511    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:17.256453    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:17.256464    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:17.267685    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:17.267695    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:17.281565    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:17.281576    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:17.316108    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:17.316120    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:17.330211    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:17.330222    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:17.334731    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:17.334738    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:19.645503    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:19.851618    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:24.647645    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:24.647774    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:24.658800    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:24.658894    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:24.673868    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:24.673958    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:24.684702    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:24.684788    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:24.695396    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:24.695477    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:24.705896    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:24.705970    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:24.716391    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:24.716469    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:24.727000    5283 logs.go:276] 0 containers: []
	W0915 11:50:24.727011    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:24.727080    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:24.737526    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:24.737544    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:24.737550    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:24.749229    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:24.749244    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:24.760876    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:24.760889    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:24.776652    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:24.776663    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:24.794522    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:24.794532    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:24.819509    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:24.819520    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:24.859961    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:24.859977    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:24.872681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:24.872693    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:24.885906    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:24.885919    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:24.898811    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:24.898826    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:24.903705    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:24.903717    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:24.918948    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:24.918958    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:24.954757    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:24.954770    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:24.970352    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:24.970365    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:24.985314    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:24.985326    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:27.500126    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:24.854294    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:24.854395    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:24.869940    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:24.870028    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:24.880865    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:24.880954    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:24.892847    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:24.892935    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:24.904002    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:24.904084    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:24.915896    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:24.915985    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:24.927447    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:24.927535    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:24.941962    5437 logs.go:276] 0 containers: []
	W0915 11:50:24.941973    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:24.942050    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:24.953217    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:24.953236    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:24.953242    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:24.995259    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:24.995281    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:25.009936    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:25.009945    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:25.021924    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:25.021934    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:25.039591    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:25.039601    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:25.051259    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:25.051269    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:25.090390    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:25.090399    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:25.094587    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:25.094595    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:25.112781    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:25.112792    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:25.137305    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:25.137314    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:25.175674    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:25.175690    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:25.192516    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:25.192527    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:25.207955    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:25.207965    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:25.219610    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:25.219620    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:25.233806    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:25.233819    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:25.248276    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:25.248285    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:27.768294    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:32.503045    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:32.503257    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:32.518839    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:32.518940    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:32.531320    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:32.531408    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:32.542497    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:32.542581    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:32.556962    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:32.557046    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:32.570440    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:32.570521    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:32.581380    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:32.581464    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:32.592019    5283 logs.go:276] 0 containers: []
	W0915 11:50:32.592030    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:32.592098    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:32.602679    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:32.602701    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:32.602706    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:32.618693    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:32.618704    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:32.631136    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:32.631152    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:32.656619    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:32.656628    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:32.691317    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:32.691326    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:32.695599    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:32.695608    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:32.716760    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:32.716773    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:32.752670    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:32.752682    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:32.764498    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:32.764510    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:32.776875    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:32.776887    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:32.799948    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:32.799959    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:32.818748    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:32.818760    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:32.831837    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:32.831849    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:32.847831    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:32.847841    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:32.864065    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:32.864079    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:32.770800    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:32.770897    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:32.785551    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:32.785634    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:32.797443    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:32.797530    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:32.809322    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:32.809412    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:32.820979    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:32.821062    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:32.832662    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:32.832742    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:32.844414    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:32.844503    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:32.855720    5437 logs.go:276] 0 containers: []
	W0915 11:50:32.855732    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:32.855804    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:32.875753    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:32.875772    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:32.875778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:32.916568    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:32.916582    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:32.931213    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:32.931223    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:32.943294    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:32.943303    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:32.954831    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:32.954842    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:32.968963    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:32.968976    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:32.992506    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:32.992514    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:33.031355    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:33.031366    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:33.046314    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:33.046328    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:33.060098    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:33.060109    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:33.071483    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:33.071497    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:33.089584    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:33.089597    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:33.103662    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:33.103671    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:33.107986    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:33.107991    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:33.146583    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:33.146594    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:33.158528    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:33.158537    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:35.382362    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:35.670671    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:40.384940    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:40.385384    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:40.417101    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:40.417258    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:40.435884    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:40.435990    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:40.459419    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:40.459509    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:40.470627    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:40.470712    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:40.480881    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:40.480954    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:40.492870    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:40.492955    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:40.506782    5283 logs.go:276] 0 containers: []
	W0915 11:50:40.506793    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:40.506868    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:40.517042    5283 logs.go:276] 1 containers: [1e1faae7d659]
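Each gathering cycle starts by enumerating containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, as in the eight runs above (logs.go:276 prints the IDs found; logs.go:278 warns when none match, e.g. "kindnet"). A minimal Go sketch of that enumeration follows; the helper name and output formatting are hypothetical.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the repeated `docker ps -a --filter name=k8s_<component>
    // --format {{.ID}}` calls above, returning one ID per matching container.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // e.g. "0 containers: []" for kindnet
    	}
    }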
	I0915 11:50:40.517061    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:40.517066    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:40.534538    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:40.534547    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:40.552131    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:40.552141    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:40.577605    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:40.577618    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:40.612519    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:40.612534    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:40.624705    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:40.624716    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:40.639995    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:40.640006    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:40.651259    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:40.651270    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:40.655817    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:40.655826    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:40.694014    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:40.694036    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:40.715611    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:40.715625    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:40.729051    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:40.729065    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:40.741730    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:40.741743    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:40.754816    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:40.754828    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:40.767572    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:40.767584    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:43.283638    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:40.671756    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:40.671873    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:40.683350    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:40.683441    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:40.695670    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:40.695760    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:40.711362    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:40.711451    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:40.723080    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:40.723176    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:40.734503    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:40.734600    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:40.749507    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:40.749600    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:40.760909    5437 logs.go:276] 0 containers: []
	W0915 11:50:40.760922    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:40.760999    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:40.774095    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:40.774114    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:40.774119    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:40.813695    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:40.813706    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:40.827876    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:40.827885    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:40.841589    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:40.841599    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:40.862330    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:40.862340    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:40.866473    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:40.866479    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:40.880952    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:40.880964    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:40.906124    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:40.906140    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:40.929241    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:40.929261    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:40.965732    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:40.965746    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:40.977155    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:40.977166    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:41.014818    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:41.014830    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:41.026186    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:41.026196    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:41.038378    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:41.038389    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:41.054291    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:41.054305    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:41.068919    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:41.068930    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:43.583206    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:48.286075    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:48.286421    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:48.312446    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:48.312580    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:48.335658    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:48.335750    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:48.348462    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:48.348559    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:48.367107    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:48.367196    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:48.377451    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:48.377538    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:48.388077    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:48.388162    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:48.398765    5283 logs.go:276] 0 containers: []
	W0915 11:50:48.398777    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:48.398844    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:48.408848    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:48.408864    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:48.408869    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:48.420509    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:48.420521    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:48.438495    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:48.438508    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:48.451798    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:48.451812    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:48.485627    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:48.485637    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:48.519804    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:48.519819    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:48.531661    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:48.531675    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:48.544153    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:48.544163    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:48.548499    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:48.548507    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:48.560324    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:48.560333    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:48.576372    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:48.576385    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:48.600710    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:48.600725    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:48.616240    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:48.616257    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:48.635052    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:48.635065    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:48.651510    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:48.651523    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:48.585546    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:48.585631    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:48.597250    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:48.597337    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:48.608380    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:48.608472    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:48.622196    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:48.622283    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:48.636650    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:48.636746    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:48.648097    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:48.648185    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:48.659884    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:48.659970    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:48.670676    5437 logs.go:276] 0 containers: []
	W0915 11:50:48.670691    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:48.670769    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:48.683549    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:48.683566    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:48.683573    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:48.695964    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:48.695979    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:48.713359    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:48.713368    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:48.752937    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:48.752946    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:48.767947    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:48.767962    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:48.779587    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:48.779601    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:48.791055    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:48.791066    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:48.802340    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:48.802348    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:48.818332    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:48.818347    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:48.834747    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:48.834757    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:48.846397    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:48.846409    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:48.860716    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:48.860726    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:48.865346    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:48.865353    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:48.900768    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:48.900778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:51.166262    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:48.937984    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:48.937994    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:48.952066    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:48.952079    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:51.476724    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:56.479097    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:56.479130    5437 kubeadm.go:597] duration metric: took 4m3.747669416s to restartPrimaryControlPlane
	W0915 11:50:56.479165    5437 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 11:50:56.479177    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0915 11:50:57.453461    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 11:50:57.458675    5437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:50:57.461716    5437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:50:57.464539    5437 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:50:57.464545    5437 kubeadm.go:157] found existing configuration files:
	
	I0915 11:50:57.464576    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf
	I0915 11:50:57.467083    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:50:57.467117    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:50:57.469743    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf
	I0915 11:50:57.472639    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:50:57.472665    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:50:57.475395    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf
	I0915 11:50:57.477853    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:50:57.477877    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:50:57.480756    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf
	I0915 11:50:57.483306    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:50:57.483329    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
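The config check above (kubeadm.go:155-163) follows a simple rule: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed before kubeadm init rewrites it. Here grep exits with status 2 because the files do not exist at all, so each one is slated for removal. A hedged Go sketch of that logic (the helper is illustrative, not minikube's implementation, and it omits the sudo/ssh plumbing seen in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleConfigs keeps a kubeconfig only if it already points at the
    // expected endpoint; paths and endpoint are taken from the log above.
    func cleanStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// `grep <endpoint> <file>` exits non-zero when the endpoint is absent
    		// (or, as in the log, when the file is missing entirely: status 2).
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			os.Remove(f) // counterpart of the `sudo rm -f` runs above
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:50549")
    }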
	I0915 11:50:57.485964    5437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 11:50:57.502833    5437 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0915 11:50:57.502919    5437 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 11:50:57.550307    5437 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 11:50:57.550438    5437 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 11:50:57.550491    5437 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 11:50:57.609805    5437 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 11:50:57.614599    5437 out.go:235]   - Generating certificates and keys ...
	I0915 11:50:57.614636    5437 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 11:50:57.614664    5437 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 11:50:57.614702    5437 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 11:50:57.614732    5437 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 11:50:57.614772    5437 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 11:50:57.614798    5437 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 11:50:57.614831    5437 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 11:50:57.614963    5437 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 11:50:57.615037    5437 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 11:50:57.615108    5437 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 11:50:57.615128    5437 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 11:50:57.615160    5437 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 11:50:57.746207    5437 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 11:50:57.950659    5437 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 11:50:58.196950    5437 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 11:50:58.408678    5437 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 11:50:58.438519    5437 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 11:50:58.438903    5437 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 11:50:58.438960    5437 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 11:50:58.516244    5437 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 11:50:56.168672    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:56.168945    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:56.198501    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:50:56.198625    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:56.218454    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:50:56.218545    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:56.231037    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:50:56.231133    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:56.242804    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:50:56.242891    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:56.253574    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:50:56.253649    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:56.265054    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:50:56.265151    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:56.276354    5283 logs.go:276] 0 containers: []
	W0915 11:50:56.276367    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:56.276433    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:56.287385    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:50:56.287406    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:50:56.287412    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:50:56.299419    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:50:56.299431    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:50:56.311804    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:50:56.311817    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:50:56.324481    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:50:56.324496    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:50:56.340227    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:50:56.340240    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:50:56.352141    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:56.352154    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:56.392557    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:50:56.392573    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:50:56.407008    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:50:56.407020    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:50:56.421085    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:50:56.421096    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:56.433404    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:56.433415    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:56.438059    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:50:56.438068    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:50:56.450406    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:56.450419    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:56.473934    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:56.473947    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:56.510254    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:50:56.510277    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:50:56.522977    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:50:56.522991    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:50:58.520419    5437 out.go:235]   - Booting up control plane ...
	I0915 11:50:58.520461    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 11:50:58.520500    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 11:50:58.520541    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 11:50:58.520584    5437 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 11:50:58.520684    5437 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 11:50:59.045152    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:03.017216    5437 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501306 seconds
	I0915 11:51:03.017313    5437 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 11:51:03.021424    5437 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 11:51:03.544249    5437 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 11:51:03.544571    5437 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-515000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 11:51:04.048256    5437 kubeadm.go:310] [bootstrap-token] Using token: 19ou2y.372pn0rn1zo0hpgd
	I0915 11:51:04.054170    5437 out.go:235]   - Configuring RBAC rules ...
	I0915 11:51:04.054235    5437 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 11:51:04.054289    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 11:51:04.062746    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 11:51:04.064068    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 11:51:04.064833    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 11:51:04.066181    5437 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 11:51:04.070265    5437 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 11:51:04.250145    5437 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 11:51:04.452080    5437 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 11:51:04.452652    5437 kubeadm.go:310] 
	I0915 11:51:04.452684    5437 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 11:51:04.452689    5437 kubeadm.go:310] 
	I0915 11:51:04.452774    5437 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 11:51:04.452780    5437 kubeadm.go:310] 
	I0915 11:51:04.452792    5437 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 11:51:04.452837    5437 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 11:51:04.452867    5437 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 11:51:04.452870    5437 kubeadm.go:310] 
	I0915 11:51:04.452900    5437 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 11:51:04.452904    5437 kubeadm.go:310] 
	I0915 11:51:04.452935    5437 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 11:51:04.452939    5437 kubeadm.go:310] 
	I0915 11:51:04.452997    5437 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 11:51:04.453032    5437 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 11:51:04.453088    5437 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 11:51:04.453132    5437 kubeadm.go:310] 
	I0915 11:51:04.453207    5437 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 11:51:04.453291    5437 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 11:51:04.453295    5437 kubeadm.go:310] 
	I0915 11:51:04.453340    5437 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 19ou2y.372pn0rn1zo0hpgd \
	I0915 11:51:04.453455    5437 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd \
	I0915 11:51:04.453473    5437 kubeadm.go:310] 	--control-plane 
	I0915 11:51:04.453478    5437 kubeadm.go:310] 
	I0915 11:51:04.453596    5437 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 11:51:04.453601    5437 kubeadm.go:310] 
	I0915 11:51:04.453648    5437 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 19ou2y.372pn0rn1zo0hpgd \
	I0915 11:51:04.453815    5437 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd 
	I0915 11:51:04.453907    5437 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 11:51:04.453913    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:51:04.453922    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:51:04.457392    5437 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 11:51:04.465400    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 11:51:04.468527    5437 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
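The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. The exact bytes are not shown in the log; the Go constant below is an illustrative reconstruction of the general shape of a bridge conflist (bridge plugin plus portmap, host-local IPAM), not the file minikube actually wrote.

    package main

    import "fmt"

    // Illustrative bridge CNI conflist; field values are assumptions, not the
    // byte-exact /etc/cni/net.d/1-k8s.conflist from this run.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }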
	I0915 11:51:04.473483    5437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 11:51:04.473559    5437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 11:51:04.473651    5437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-515000 minikube.k8s.io/updated_at=2024_09_15T11_51_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=stopped-upgrade-515000 minikube.k8s.io/primary=true
	I0915 11:51:04.478719    5437 ops.go:34] apiserver oom_adj: -16
	I0915 11:51:04.510378    5437 kubeadm.go:1113] duration metric: took 36.859333ms to wait for elevateKubeSystemPrivileges
	I0915 11:51:04.515356    5437 kubeadm.go:394] duration metric: took 4m11.797980792s to StartCluster
	I0915 11:51:04.515372    5437 settings.go:142] acquiring lock: {Name:mke41fab1fd2ef0229fde23400affd11462eeb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:51:04.515462    5437 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:51:04.515916    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:51:04.516145    5437 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:51:04.516155    5437 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 11:51:04.516224    5437 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-515000"
	I0915 11:51:04.516226    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:51:04.516233    5437 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-515000"
	W0915 11:51:04.516236    5437 addons.go:243] addon storage-provisioner should already be in state true
	I0915 11:51:04.516250    5437 host.go:66] Checking if "stopped-upgrade-515000" exists ...
	I0915 11:51:04.516257    5437 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-515000"
	I0915 11:51:04.516266    5437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-515000"
	I0915 11:51:04.517157    5437 kapi.go:59] client config for stopped-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104435800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:51:04.517273    5437 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-515000"
	W0915 11:51:04.517278    5437 addons.go:243] addon default-storageclass should already be in state true
	I0915 11:51:04.517284    5437 host.go:66] Checking if "stopped-upgrade-515000" exists ...
	I0915 11:51:04.520405    5437 out.go:177] * Verifying Kubernetes components...
	I0915 11:51:04.520747    5437 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 11:51:04.523446    5437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 11:51:04.523454    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:51:04.527324    5437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:51:04.047438    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:04.047565    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:04.058531    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:04.058610    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:04.070021    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:04.070104    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:04.084006    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:04.084092    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:04.095631    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:04.095721    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:04.106953    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:04.107045    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:04.117748    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:04.117835    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:04.128750    5283 logs.go:276] 0 containers: []
	W0915 11:51:04.128763    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:04.128839    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:04.139866    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:04.139885    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:04.139893    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:04.175842    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:04.175854    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:04.190594    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:04.190604    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:04.202827    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:04.202838    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:04.214798    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:04.214809    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:04.232972    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:04.232985    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:04.246895    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:04.246908    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:04.260461    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:04.260474    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:04.278728    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:04.278741    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:04.291036    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:04.291056    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:04.295727    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:04.295739    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:04.311275    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:04.311290    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:04.336980    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:04.336997    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:04.375578    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:04.375593    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:04.391036    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:04.391048    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:06.905117    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:04.531357    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:51:04.535357    5437 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:51:04.535364    5437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 11:51:04.535370    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:51:04.601110    5437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:51:04.606380    5437 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:51:04.606432    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:51:04.610882    5437 api_server.go:72] duration metric: took 94.726208ms to wait for apiserver process to appear ...
	I0915 11:51:04.610891    5437 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:51:04.610897    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:04.630981    5437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:51:04.673366    5437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 11:51:04.994538    5437 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 11:51:04.994550    5437 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 11:51:11.905651    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:11.905817    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:11.918708    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:11.918796    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:11.929552    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:11.929632    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:11.948278    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:11.948367    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:11.959186    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:11.959273    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:11.969580    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:11.969661    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:11.980742    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:11.980827    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:11.991349    5283 logs.go:276] 0 containers: []
	W0915 11:51:11.991360    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:11.991431    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:12.001573    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:12.001590    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:12.001597    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:12.038558    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:12.038572    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:12.050620    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:12.050632    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:12.065672    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:12.065687    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:12.083163    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:12.083177    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:12.106303    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:12.106313    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:12.139852    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:12.139860    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:12.151349    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:12.151361    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:12.163144    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:12.163155    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:12.174724    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:12.174736    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:12.186621    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:12.186632    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:12.201001    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:12.201012    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:12.212739    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:12.212755    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:12.223887    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:12.223896    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:12.228473    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:12.228483    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:09.613064    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:09.613136    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:14.743334    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:14.613651    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:14.613704    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:19.745038    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:19.745178    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:51:19.758487    5283 logs.go:276] 1 containers: [9c6f5acbdc80]
	I0915 11:51:19.758585    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:51:19.775192    5283 logs.go:276] 1 containers: [765a972118c3]
	I0915 11:51:19.775279    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:51:19.786242    5283 logs.go:276] 4 containers: [cb2cf0c6e95a b928d4bef963 ef117a7c0f4a 31a36fe7f586]
	I0915 11:51:19.786322    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:51:19.796929    5283 logs.go:276] 1 containers: [6f7a53bb93e2]
	I0915 11:51:19.796995    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:51:19.807569    5283 logs.go:276] 1 containers: [f8efd9dbeaba]
	I0915 11:51:19.807647    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:51:19.818900    5283 logs.go:276] 1 containers: [1b8c1a0bbd7b]
	I0915 11:51:19.818982    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:51:19.829711    5283 logs.go:276] 0 containers: []
	W0915 11:51:19.829729    5283 logs.go:278] No container was found matching "kindnet"
	I0915 11:51:19.829802    5283 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:51:19.844658    5283 logs.go:276] 1 containers: [1e1faae7d659]
	I0915 11:51:19.844676    5283 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:51:19.844682    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:51:19.879206    5283 logs.go:123] Gathering logs for coredns [cb2cf0c6e95a] ...
	I0915 11:51:19.879216    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cb2cf0c6e95a"
	I0915 11:51:19.891365    5283 logs.go:123] Gathering logs for container status ...
	I0915 11:51:19.891376    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:51:19.903058    5283 logs.go:123] Gathering logs for kubelet ...
	I0915 11:51:19.903073    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:51:19.936714    5283 logs.go:123] Gathering logs for storage-provisioner [1e1faae7d659] ...
	I0915 11:51:19.936725    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e1faae7d659"
	I0915 11:51:19.947992    5283 logs.go:123] Gathering logs for Docker ...
	I0915 11:51:19.948008    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:51:19.970544    5283 logs.go:123] Gathering logs for dmesg ...
	I0915 11:51:19.970553    5283 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:51:19.975315    5283 logs.go:123] Gathering logs for etcd [765a972118c3] ...
	I0915 11:51:19.975323    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 765a972118c3"
	I0915 11:51:19.988806    5283 logs.go:123] Gathering logs for coredns [b928d4bef963] ...
	I0915 11:51:19.988815    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b928d4bef963"
	I0915 11:51:20.009436    5283 logs.go:123] Gathering logs for kube-scheduler [6f7a53bb93e2] ...
	I0915 11:51:20.009450    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7a53bb93e2"
	I0915 11:51:20.023837    5283 logs.go:123] Gathering logs for kube-apiserver [9c6f5acbdc80] ...
	I0915 11:51:20.023848    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c6f5acbdc80"
	I0915 11:51:20.038157    5283 logs.go:123] Gathering logs for coredns [ef117a7c0f4a] ...
	I0915 11:51:20.038170    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef117a7c0f4a"
	I0915 11:51:20.050556    5283 logs.go:123] Gathering logs for coredns [31a36fe7f586] ...
	I0915 11:51:20.050571    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31a36fe7f586"
	I0915 11:51:20.062681    5283 logs.go:123] Gathering logs for kube-proxy [f8efd9dbeaba] ...
	I0915 11:51:20.062697    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f8efd9dbeaba"
	I0915 11:51:20.074689    5283 logs.go:123] Gathering logs for kube-controller-manager [1b8c1a0bbd7b] ...
	I0915 11:51:20.074700    5283 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b8c1a0bbd7b"
	I0915 11:51:22.594987    5283 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:19.614072    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:19.614115    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:27.597299    5283 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:27.600621    5283 out.go:201] 
	W0915 11:51:27.603622    5283 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0915 11:51:27.603627    5283 out.go:270] * 
	W0915 11:51:27.604048    5283 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:51:27.615426    5283 out.go:201] 
	I0915 11:51:24.614685    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:24.614709    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:29.615382    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:29.615434    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:34.616327    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:34.616366    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0915 11:51:34.996787    5437 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0915 11:51:35.000031    5437 out.go:177] * Enabled addons: storage-provisioner
	I0915 11:51:35.011949    5437 addons.go:510] duration metric: took 30.495936834s for enable addons: enabled=[storage-provisioner]
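
The interleaved api_server.go:253/269 pairs above are two test processes (pids 5283 and 5437) each polling their cluster's apiserver /healthz endpoint with a short per-request timeout until an overall deadline (6m0s here) expires, at which point the run fails with GUEST_START. The sketch below shows that style of bounded health poll in Go; it is an illustration under assumptions, not minikube's actual implementation, and the InsecureSkipVerify shortcut stands in for loading the cluster CA. The function name and constants are made up.

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns 200 "ok" or the overall
    // deadline passes. Hypothetical helper, for illustration only.
    func waitForHealthz(url string, total time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request cap, like the Client.Timeout errors above
    		Transport: &http.Transport{
    			// Assumption: skip verification instead of loading the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(total)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(5 * time.Second)
    	}
    	return errors.New("apiserver healthz never reported healthy")
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
    		fmt.Println("wait for healthy API server:", err)
    	}
    }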
	
	
	==> Docker <==
	-- Journal begins at Sun 2024-09-15 18:42:38 UTC, ends at Sun 2024-09-15 18:51:43 UTC. --
	Sep 15 18:51:27 running-upgrade-196000 dockerd[3149]: time="2024-09-15T18:51:27.759844310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 15 18:51:27 running-upgrade-196000 dockerd[3149]: time="2024-09-15T18:51:27.759908099Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/81101de5be2ec21a97996d848b1cbc16848cd226c474056e110fc26f9067ad6c pid=18656 runtime=io.containerd.runc.v2
	Sep 15 18:51:27 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:27Z" level=error msg="ContainerStats resp: {0x40000b9280 linux}"
	Sep 15 18:51:27 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:27Z" level=error msg="ContainerStats resp: {0x40009b92c0 linux}"
	Sep 15 18:51:28 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 18:51:28 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:28Z" level=error msg="ContainerStats resp: {0x4000aaba40 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x40007ec040 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x4000414040 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x400089a340 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x400089a740 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x400089ab80 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x400089af80 linux}"
	Sep 15 18:51:29 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:29Z" level=error msg="ContainerStats resp: {0x40007ed900 linux}"
	Sep 15 18:51:33 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 18:51:38 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 15 18:51:40 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:40Z" level=error msg="ContainerStats resp: {0x4000612800 linux}"
	Sep 15 18:51:40 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:40Z" level=error msg="ContainerStats resp: {0x4000613440 linux}"
	Sep 15 18:51:41 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:41Z" level=error msg="ContainerStats resp: {0x40007ec200 linux}"
	Sep 15 18:51:41 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:41Z" level=error msg="ContainerStats resp: {0x40007ec880 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x40007ed380 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x40007ed540 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x4000aabd00 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x40000b9140 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x40000b9740 linux}"
	Sep 15 18:51:42 running-upgrade-196000 cri-dockerd[2986]: time="2024-09-15T18:51:42Z" level=error msg="ContainerStats resp: {0x40000b9ac0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	81101de5be2ec       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   4e5f62ca0f202
	729fd37beadf9       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   5df1d49066c90
	cb2cf0c6e95a6       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5df1d49066c90
	b928d4bef963e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   4e5f62ca0f202
	f8efd9dbeabaa       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   d3a5adf91d69a
	1e1faae7d659a       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   13933fe3ab6d7
	765a972118c3f       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0dd9f7e3aa73f
	6f7a53bb93e27       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   4976ab1523b47
	1b8c1a0bbd7b2       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   cb108d6ea16f5
	9c6f5acbdc806       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   61e3203b62fbe
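
The coredns rows above show attempt 2 running alongside the exited attempt-1 containers, which is exactly what the earlier `docker ps -a --filter=name=k8s_coredns --format={{.ID}}` calls in the log gatherer enumerate. A rough equivalent of that query through the Docker Go SDK is sketched below (recent SDK versions, where the list options live in the container package); purely illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/api/types/container"
    	"github.com/docker/docker/api/types/filters"
    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	// Same filter the log gatherer passes to `docker ps -a`.
    	f := filters.NewArgs(filters.Arg("name", "k8s_coredns"))
    	list, err := cli.ContainerList(context.Background(),
    		container.ListOptions{All: true, Filters: f})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range list {
    		fmt.Println(c.ID[:12], c.State, c.Status)
    	}
    }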
	
	
	==> coredns [729fd37beadf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7716702137839760738.2846421032604883448. HINFO: read udp 10.244.0.3:47838->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7716702137839760738.2846421032604883448. HINFO: read udp 10.244.0.3:56880->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7716702137839760738.2846421032604883448. HINFO: read udp 10.244.0.3:48599->10.0.2.3:53: i/o timeout
	
	
	==> coredns [81101de5be2e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4940451276891020145.2511502160907745592. HINFO: read udp 10.244.0.2:33998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4940451276891020145.2511502160907745592. HINFO: read udp 10.244.0.2:59145->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4940451276891020145.2511502160907745592. HINFO: read udp 10.244.0.2:57468->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b928d4bef963] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:43380->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:35059->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:33246->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:33116->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:38129->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:35695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:49100->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5461060607247057090.511876134512881414. HINFO: read udp 10.244.0.2:55847->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cb2cf0c6e95a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:36557->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:52867->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:34003->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:54973->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:40207->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:42852->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:60654->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:44598->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:36534->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6557729714492114938.8721413580951622685. HINFO: read udp 10.244.0.3:34249->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
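
All four CoreDNS logs fail the same way: on startup the forward plugin probes the configured upstream resolver (10.0.2.3:53, the QEMU user-network DNS) with a randomized HINFO query, and every read times out, so pod DNS has no working upstream even though CoreDNS itself is serving on :53. A rough reproduction of that probe using github.com/miekg/dns (the library CoreDNS builds on) follows; the query name is made up.

    package main

    import (
    	"fmt"
    	"time"

    	"github.com/miekg/dns"
    )

    func main() {
    	c := &dns.Client{Net: "udp", Timeout: 2 * time.Second}
    	m := new(dns.Msg)
    	// CoreDNS uses a random label; any name exercises reachability.
    	m.SetQuestion("1234567890.example.", dns.TypeHINFO)
    	if _, _, err := c.Exchange(m, "10.0.2.3:53"); err != nil {
    		// Against an unreachable upstream this yields a read timeout,
    		// matching the "i/o timeout" lines above.
    		fmt.Println("upstream probe failed:", err)
    	}
    }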
	
	
	==> describe nodes <==
	Name:               running-upgrade-196000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-196000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673
	                    minikube.k8s.io/name=running-upgrade-196000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T11_47_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 18:47:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-196000
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 18:51:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 18:47:26 +0000   Sun, 15 Sep 2024 18:47:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 18:47:26 +0000   Sun, 15 Sep 2024 18:47:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 18:47:26 +0000   Sun, 15 Sep 2024 18:47:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 18:47:26 +0000   Sun, 15 Sep 2024 18:47:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-196000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a7b4113568a43fd95b2cf276b2abb09
	  System UUID:                3a7b4113568a43fd95b2cf276b2abb09
	  Boot ID:                    43012866-3dbc-456d-a33c-91180bd7a769
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-687zl                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-7kbtr                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-196000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-196000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-running-upgrade-196000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-98tq4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-196000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-196000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-196000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-196000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-196000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-196000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-196000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-196000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-196000 event: Registered Node running-upgrade-196000 in Controller
	
	
	==> dmesg <==
	[  +1.640346] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.074712] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.080404] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +1.139651] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.089942] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.074047] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.161436] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[Sep15 18:43] systemd-fstab-generator[1808]: Ignoring "noauto" for root device
	[  +2.566212] systemd-fstab-generator[2166]: Ignoring "noauto" for root device
	[  +0.142171] systemd-fstab-generator[2200]: Ignoring "noauto" for root device
	[  +0.100470] systemd-fstab-generator[2214]: Ignoring "noauto" for root device
	[  +0.093897] systemd-fstab-generator[2229]: Ignoring "noauto" for root device
	[  +2.532410] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.170492] systemd-fstab-generator[2943]: Ignoring "noauto" for root device
	[  +0.082285] systemd-fstab-generator[2954]: Ignoring "noauto" for root device
	[  +0.075673] systemd-fstab-generator[2965]: Ignoring "noauto" for root device
	[  +0.101171] systemd-fstab-generator[2979]: Ignoring "noauto" for root device
	[  +2.275339] systemd-fstab-generator[3136]: Ignoring "noauto" for root device
	[  +3.952550] systemd-fstab-generator[3506]: Ignoring "noauto" for root device
	[  +1.211830] systemd-fstab-generator[3830]: Ignoring "noauto" for root device
	[ +18.990244] kauditd_printk_skb: 68 callbacks suppressed
	[Sep15 18:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.355257] systemd-fstab-generator[11806]: Ignoring "noauto" for root device
	[  +6.127626] systemd-fstab-generator[12420]: Ignoring "noauto" for root device
	[  +0.466173] systemd-fstab-generator[12556]: Ignoring "noauto" for root device
	
	
	==> etcd [765a972118c3] <==
	{"level":"info","ts":"2024-09-15T18:47:21.510Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T18:47:21.510Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T18:47:21.505Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-15T18:47:21.510Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-15T18:47:21.506Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-15T18:47:21.510Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-15T18:47:21.510Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-15T18:47:22.491Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:47:22.492Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:47:22.492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:47:22.492Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T18:47:22.492Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-196000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T18:47:22.492Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:47:22.493Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T18:47:22.493Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T18:47:22.494Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T18:47:22.494Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T18:47:22.494Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 18:51:43 up 9 min,  0 users,  load average: 0.29, 0.24, 0.12
	Linux running-upgrade-196000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9c6f5acbdc80] <==
	I0915 18:47:23.707177       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0915 18:47:23.729758       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0915 18:47:23.731614       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0915 18:47:23.731743       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0915 18:47:23.731893       1 cache.go:39] Caches are synced for autoregister controller
	I0915 18:47:23.732430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 18:47:23.739385       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0915 18:47:24.450329       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0915 18:47:24.617181       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0915 18:47:24.620511       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0915 18:47:24.620535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 18:47:24.755497       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0915 18:47:24.769398       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0915 18:47:24.875207       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0915 18:47:24.877152       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0915 18:47:24.877515       1 controller.go:611] quota admission added evaluator for: endpoints
	I0915 18:47:24.878812       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0915 18:47:25.737673       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0915 18:47:26.414526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0915 18:47:26.423557       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0915 18:47:26.449964       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0915 18:47:26.467681       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 18:47:39.245306       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0915 18:47:39.345979       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0915 18:47:40.076540       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [1b8c1a0bbd7b] <==
	I0915 18:47:38.586955       1 shared_informer.go:262] Caches are synced for expand
	I0915 18:47:38.588020       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0915 18:47:38.588062       1 shared_informer.go:262] Caches are synced for crt configmap
	I0915 18:47:38.589174       1 shared_informer.go:262] Caches are synced for TTL
	I0915 18:47:38.594453       1 shared_informer.go:262] Caches are synced for taint
	I0915 18:47:38.594461       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0915 18:47:38.594483       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0915 18:47:38.594503       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-196000. Assuming now as a timestamp.
	I0915 18:47:38.594518       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0915 18:47:38.594542       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0915 18:47:38.594595       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0915 18:47:38.594604       1 event.go:294] "Event occurred" object="running-upgrade-196000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-196000 event: Registered Node running-upgrade-196000 in Controller"
	I0915 18:47:38.661302       1 shared_informer.go:262] Caches are synced for disruption
	I0915 18:47:38.661337       1 disruption.go:371] Sending events to api server.
	I0915 18:47:38.688194       1 shared_informer.go:262] Caches are synced for stateful set
	I0915 18:47:38.695528       1 shared_informer.go:262] Caches are synced for attach detach
	I0915 18:47:38.752113       1 shared_informer.go:262] Caches are synced for resource quota
	I0915 18:47:38.796545       1 shared_informer.go:262] Caches are synced for resource quota
	I0915 18:47:39.213623       1 shared_informer.go:262] Caches are synced for garbage collector
	I0915 18:47:39.247800       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-98tq4"
	I0915 18:47:39.259626       1 shared_informer.go:262] Caches are synced for garbage collector
	I0915 18:47:39.259703       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0915 18:47:39.347248       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0915 18:47:39.599169       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7kbtr"
	I0915 18:47:39.603271       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-687zl"
	
	
	==> kube-proxy [f8efd9dbeaba] <==
	I0915 18:47:40.065073       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0915 18:47:40.065118       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0915 18:47:40.065128       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0915 18:47:40.074649       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0915 18:47:40.074731       1 server_others.go:206] "Using iptables Proxier"
	I0915 18:47:40.074764       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0915 18:47:40.074886       1 server.go:661] "Version info" version="v1.24.1"
	I0915 18:47:40.074893       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 18:47:40.075339       1 config.go:317] "Starting service config controller"
	I0915 18:47:40.075362       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0915 18:47:40.075389       1 config.go:226] "Starting endpoint slice config controller"
	I0915 18:47:40.075402       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0915 18:47:40.075702       1 config.go:444] "Starting node config controller"
	I0915 18:47:40.075722       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0915 18:47:40.175438       1 shared_informer.go:262] Caches are synced for service config
	I0915 18:47:40.175446       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0915 18:47:40.175751       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [6f7a53bb93e2] <==
	W0915 18:47:23.667899       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 18:47:23.667903       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0915 18:47:23.667922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 18:47:23.667929       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0915 18:47:23.667945       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 18:47:23.667949       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0915 18:47:23.667960       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 18:47:23.667963       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0915 18:47:23.667974       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 18:47:23.667977       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0915 18:47:23.667989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 18:47:23.667992       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0915 18:47:23.668004       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 18:47:23.668012       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0915 18:47:24.512027       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 18:47:24.512049       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0915 18:47:24.611478       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 18:47:24.611676       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0915 18:47:24.615870       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 18:47:24.615906       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0915 18:47:24.630033       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 18:47:24.630129       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0915 18:47:24.664064       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 18:47:24.664158       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0915 18:47:25.264243       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sun 2024-09-15 18:42:38 UTC, ends at Sun 2024-09-15 18:51:43 UTC. --
	Sep 15 18:47:28 running-upgrade-196000 kubelet[12426]: I0915 18:47:28.646093   12426 request.go:601] Waited for 1.11829042s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Sep 15 18:47:28 running-upgrade-196000 kubelet[12426]: E0915 18:47:28.648506   12426 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-196000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-196000"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: I0915 18:47:38.573345   12426 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: I0915 18:47:38.573579   12426 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: I0915 18:47:38.600594   12426 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: I0915 18:47:38.673580   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d03164b6-ca91-443f-922d-87f78ba14e2f-tmp\") pod \"storage-provisioner\" (UID: \"d03164b6-ca91-443f-922d-87f78ba14e2f\") " pod="kube-system/storage-provisioner"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: I0915 18:47:38.673642   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkj7g\" (UniqueName: \"kubernetes.io/projected/d03164b6-ca91-443f-922d-87f78ba14e2f-kube-api-access-zkj7g\") pod \"storage-provisioner\" (UID: \"d03164b6-ca91-443f-922d-87f78ba14e2f\") " pod="kube-system/storage-provisioner"
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: E0915 18:47:38.914041   12426 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: E0915 18:47:38.914072   12426 projected.go:192] Error preparing data for projected volume kube-api-access-zkj7g for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 15 18:47:38 running-upgrade-196000 kubelet[12426]: E0915 18:47:38.914125   12426 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d03164b6-ca91-443f-922d-87f78ba14e2f-kube-api-access-zkj7g podName:d03164b6-ca91-443f-922d-87f78ba14e2f nodeName:}" failed. No retries permitted until 2024-09-15 18:47:39.414104762 +0000 UTC m=+13.010303108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zkj7g" (UniqueName: "kubernetes.io/projected/d03164b6-ca91-443f-922d-87f78ba14e2f-kube-api-access-zkj7g") pod "storage-provisioner" (UID: "d03164b6-ca91-443f-922d-87f78ba14e2f") : configmap "kube-root-ca.crt" not found
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.252393   12426 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.280689   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af7d5ca0-879a-42bb-ac2a-5202deb679c5-kube-proxy\") pod \"kube-proxy-98tq4\" (UID: \"af7d5ca0-879a-42bb-ac2a-5202deb679c5\") " pod="kube-system/kube-proxy-98tq4"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.280710   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7d5ca0-879a-42bb-ac2a-5202deb679c5-xtables-lock\") pod \"kube-proxy-98tq4\" (UID: \"af7d5ca0-879a-42bb-ac2a-5202deb679c5\") " pod="kube-system/kube-proxy-98tq4"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.280721   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvwz\" (UniqueName: \"kubernetes.io/projected/af7d5ca0-879a-42bb-ac2a-5202deb679c5-kube-api-access-gxvwz\") pod \"kube-proxy-98tq4\" (UID: \"af7d5ca0-879a-42bb-ac2a-5202deb679c5\") " pod="kube-system/kube-proxy-98tq4"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.280738   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7d5ca0-879a-42bb-ac2a-5202deb679c5-lib-modules\") pod \"kube-proxy-98tq4\" (UID: \"af7d5ca0-879a-42bb-ac2a-5202deb679c5\") " pod="kube-system/kube-proxy-98tq4"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.603926   12426 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.607257   12426 topology_manager.go:200] "Topology Admit Handler"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.647467   12426 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="13933fe3ab6d7cf69cafc63f31ae6449ea33295ad2f7a96699c4f59d84a89680"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.786861   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fc2\" (UniqueName: \"kubernetes.io/projected/aa482b2a-9bd4-486e-b813-a76aa7b0ab25-kube-api-access-76fc2\") pod \"coredns-6d4b75cb6d-687zl\" (UID: \"aa482b2a-9bd4-486e-b813-a76aa7b0ab25\") " pod="kube-system/coredns-6d4b75cb6d-687zl"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.786899   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mngh\" (UniqueName: \"kubernetes.io/projected/45ead0b3-d373-4802-849f-16a345c9e3e3-kube-api-access-4mngh\") pod \"coredns-6d4b75cb6d-7kbtr\" (UID: \"45ead0b3-d373-4802-849f-16a345c9e3e3\") " pod="kube-system/coredns-6d4b75cb6d-7kbtr"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.786947   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa482b2a-9bd4-486e-b813-a76aa7b0ab25-config-volume\") pod \"coredns-6d4b75cb6d-687zl\" (UID: \"aa482b2a-9bd4-486e-b813-a76aa7b0ab25\") " pod="kube-system/coredns-6d4b75cb6d-687zl"
	Sep 15 18:47:39 running-upgrade-196000 kubelet[12426]: I0915 18:47:39.786962   12426 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/45ead0b3-d373-4802-849f-16a345c9e3e3-config-volume\") pod \"coredns-6d4b75cb6d-7kbtr\" (UID: \"45ead0b3-d373-4802-849f-16a345c9e3e3\") " pod="kube-system/coredns-6d4b75cb6d-7kbtr"
	Sep 15 18:47:40 running-upgrade-196000 kubelet[12426]: I0915 18:47:40.668179   12426 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4e5f62ca0f202870ab7edcf67e26ffd474905646cce597cf3bc04a71dca3b527"
	Sep 15 18:51:27 running-upgrade-196000 kubelet[12426]: I0915 18:51:27.865040   12426 scope.go:110] "RemoveContainer" containerID="31a36fe7f58625e911ee046c59a546424e7f732ff29a47425fae9b4fb9b724a2"
	Sep 15 18:51:27 running-upgrade-196000 kubelet[12426]: I0915 18:51:27.881095   12426 scope.go:110] "RemoveContainer" containerID="ef117a7c0f4a7ebe8ace04217fa89c6c1a64eada4c7a53834dc025b11375d5dd"
	
	
	==> storage-provisioner [1e1faae7d659] <==
	I0915 18:47:39.717373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 18:47:39.722679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 18:47:39.722737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 18:47:39.725694       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 18:47:39.726012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31f927d2-a0e2-44e8-ad45-1f2c65418179", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-196000_88dffa30-3263-4260-a1f7-624c1bd4d3b6 became leader
	I0915 18:47:39.726074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-196000_88dffa30-3263-4260-a1f7-624c1bd4d3b6!
	I0915 18:47:39.828695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-196000_88dffa30-3263-4260-a1f7-624c1bd4d3b6!
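
The storage-provisioner acquires a cluster-wide lock (kube-system/k8s.io-minikube-hostpath, stored on an Endpoints object per the event above) before starting its controller, so only one replica provisions volumes at a time. The sketch below shows the same pattern with client-go's leader-election helpers, but using the Lease-based lock that current client-go recommends rather than the Endpoints lock seen in the log; the kubeconfig path is the one used by the gatherer commands earlier, and the identity string is hypothetical.

    package main

    import (
    	"context"
    	"log"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"}, // hypothetical identity
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:            lock,
    		LeaseDuration:   15 * time.Second,
    		RenewDeadline:   10 * time.Second,
    		RetryPeriod:     2 * time.Second,
    		ReleaseOnCancel: true,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; start controller") },
    			OnStoppedLeading: func() { log.Println("lost lease; stop work") },
    		},
    	})
    }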
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-196000 -n running-upgrade-196000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-196000 -n running-upgrade-196000: exit status 2 (15.685620375s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-196000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-196000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-196000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-196000: (1.194827042s)
--- FAIL: TestRunningBinaryUpgrade (588.43s)

TestKubernetesUpgrade (18.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.804383083s)

-- stdout --
	* [kubernetes-upgrade-902000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-902000" primary control-plane node in "kubernetes-upgrade-902000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-902000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:45:12.554329    5353 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:45:12.554462    5353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:45:12.554465    5353 out.go:358] Setting ErrFile to fd 2...
	I0915 11:45:12.554468    5353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:45:12.554611    5353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:45:12.555710    5353 out.go:352] Setting JSON to false
	I0915 11:45:12.572426    5353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4475,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:45:12.572489    5353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:45:12.577950    5353 out.go:177] * [kubernetes-upgrade-902000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:45:12.584994    5353 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:45:12.585030    5353 notify.go:220] Checking for updates...
	I0915 11:45:12.591937    5353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:45:12.594984    5353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:45:12.597927    5353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:45:12.600970    5353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:45:12.603982    5353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:45:12.607188    5353 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:45:12.607255    5353 config.go:182] Loaded profile config "running-upgrade-196000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:45:12.607307    5353 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:45:12.610938    5353 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:45:12.617914    5353 start.go:297] selected driver: qemu2
	I0915 11:45:12.617920    5353 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:45:12.617926    5353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:45:12.620152    5353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:45:12.622959    5353 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:45:12.626039    5353 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 11:45:12.626052    5353 cni.go:84] Creating CNI manager for ""
	I0915 11:45:12.626073    5353 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 11:45:12.626103    5353 start.go:340] cluster config:
	{Name:kubernetes-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:45:12.629524    5353 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:45:12.636945    5353 out.go:177] * Starting "kubernetes-upgrade-902000" primary control-plane node in "kubernetes-upgrade-902000" cluster
	I0915 11:45:12.640958    5353 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 11:45:12.640974    5353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 11:45:12.640990    5353 cache.go:56] Caching tarball of preloaded images
	I0915 11:45:12.641048    5353 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:45:12.641054    5353 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 11:45:12.641119    5353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kubernetes-upgrade-902000/config.json ...
	I0915 11:45:12.641130    5353 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kubernetes-upgrade-902000/config.json: {Name:mka0266937130c58f0458b5dab2fc162a6fe22ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:45:12.641473    5353 start.go:360] acquireMachinesLock for kubernetes-upgrade-902000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:45:12.641506    5353 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "kubernetes-upgrade-902000"
	I0915 11:45:12.641516    5353 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:45:12.641539    5353 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:45:12.644992    5353 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:45:12.660841    5353 start.go:159] libmachine.API.Create for "kubernetes-upgrade-902000" (driver="qemu2")
	I0915 11:45:12.660865    5353 client.go:168] LocalClient.Create starting
	I0915 11:45:12.660922    5353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:45:12.660954    5353 main.go:141] libmachine: Decoding PEM data...
	I0915 11:45:12.660963    5353 main.go:141] libmachine: Parsing certificate...
	I0915 11:45:12.661006    5353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:45:12.661028    5353 main.go:141] libmachine: Decoding PEM data...
	I0915 11:45:12.661042    5353 main.go:141] libmachine: Parsing certificate...
	I0915 11:45:12.661455    5353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:45:12.822228    5353 main.go:141] libmachine: Creating SSH key...
	I0915 11:45:12.885596    5353 main.go:141] libmachine: Creating Disk image...
	I0915 11:45:12.885604    5353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:45:12.885775    5353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:12.895134    5353 main.go:141] libmachine: STDOUT: 
	I0915 11:45:12.895150    5353 main.go:141] libmachine: STDERR: 
	I0915 11:45:12.895206    5353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2 +20000M
	I0915 11:45:12.903404    5353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:45:12.903423    5353 main.go:141] libmachine: STDERR: 
	I0915 11:45:12.903441    5353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:12.903451    5353 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:45:12.903464    5353 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:45:12.903497    5353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:d6:bc:0f:0f:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:12.905151    5353 main.go:141] libmachine: STDOUT: 
	I0915 11:45:12.905168    5353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:45:12.905188    5353 client.go:171] duration metric: took 244.321167ms to LocalClient.Create
	I0915 11:45:14.907253    5353 start.go:128] duration metric: took 2.265731583s to createHost
	I0915 11:45:14.907311    5353 start.go:83] releasing machines lock for "kubernetes-upgrade-902000", held for 2.265827666s
	W0915 11:45:14.907343    5353 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:45:14.920964    5353 out.go:177] * Deleting "kubernetes-upgrade-902000" in qemu2 ...
	W0915 11:45:14.932842    5353 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:45:14.932849    5353 start.go:729] Will try again in 5 seconds ...
	I0915 11:45:19.934919    5353 start.go:360] acquireMachinesLock for kubernetes-upgrade-902000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:45:19.935031    5353 start.go:364] duration metric: took 87.667µs to acquireMachinesLock for "kubernetes-upgrade-902000"
	I0915 11:45:19.935047    5353 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:45:19.935079    5353 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:45:19.944727    5353 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:45:19.962413    5353 start.go:159] libmachine.API.Create for "kubernetes-upgrade-902000" (driver="qemu2")
	I0915 11:45:19.962451    5353 client.go:168] LocalClient.Create starting
	I0915 11:45:19.962532    5353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:45:19.962571    5353 main.go:141] libmachine: Decoding PEM data...
	I0915 11:45:19.962580    5353 main.go:141] libmachine: Parsing certificate...
	I0915 11:45:19.962616    5353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:45:19.962642    5353 main.go:141] libmachine: Decoding PEM data...
	I0915 11:45:19.962648    5353 main.go:141] libmachine: Parsing certificate...
	I0915 11:45:19.962971    5353 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:45:20.122573    5353 main.go:141] libmachine: Creating SSH key...
	I0915 11:45:20.262951    5353 main.go:141] libmachine: Creating Disk image...
	I0915 11:45:20.262964    5353 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:45:20.263175    5353 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:20.272792    5353 main.go:141] libmachine: STDOUT: 
	I0915 11:45:20.272811    5353 main.go:141] libmachine: STDERR: 
	I0915 11:45:20.272882    5353 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2 +20000M
	I0915 11:45:20.280946    5353 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:45:20.280963    5353 main.go:141] libmachine: STDERR: 
	I0915 11:45:20.280975    5353 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:20.280990    5353 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:45:20.281006    5353 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:45:20.281034    5353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:14:b4:cb:66:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:20.282775    5353 main.go:141] libmachine: STDOUT: 
	I0915 11:45:20.282788    5353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:45:20.282804    5353 client.go:171] duration metric: took 320.352417ms to LocalClient.Create
	I0915 11:45:22.285009    5353 start.go:128] duration metric: took 2.349919834s to createHost
	I0915 11:45:22.285179    5353 start.go:83] releasing machines lock for "kubernetes-upgrade-902000", held for 2.350136042s
	W0915 11:45:22.285607    5353 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-902000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:45:22.301319    5353 out.go:201] 
	W0915 11:45:22.303057    5353 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:45:22.303088    5353 out.go:270] * 
	* 
	W0915 11:45:22.305609    5353 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:45:22.316341    5353 out.go:201] 

** /stderr **
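The disk-creation step in the log above is just the two qemu-img calls shown; reproduced stand-alone for reference, with the machine directory bound to a $MACHINE variable for brevity:

	MACHINE=/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000
	qemu-img convert -f raw -O qcow2 "$MACHINE/disk.qcow2.raw" "$MACHINE/disk.qcow2"   # raw -> qcow2
	qemu-img resize "$MACHINE/disk.qcow2" +20000M                                      # grow by 20000 MB

Both calls succeeded here; the start only failed afterwards, when socket_vmnet_client could not reach /var/run/socket_vmnet.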
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-902000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-902000: (3.280805625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-902000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-902000 status --format={{.Host}}: exit status 7 (49.098833ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.18668625s)

-- stdout --
	* [kubernetes-upgrade-902000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-902000" primary control-plane node in "kubernetes-upgrade-902000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-902000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:45:25.690981    5389 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:45:25.691115    5389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:45:25.691118    5389 out.go:358] Setting ErrFile to fd 2...
	I0915 11:45:25.691121    5389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:45:25.691261    5389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:45:25.692284    5389 out.go:352] Setting JSON to false
	I0915 11:45:25.708769    5389 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4488,"bootTime":1726421437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:45:25.708846    5389 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:45:25.713430    5389 out.go:177] * [kubernetes-upgrade-902000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:45:25.722301    5389 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:45:25.722370    5389 notify.go:220] Checking for updates...
	I0915 11:45:25.729302    5389 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:45:25.732345    5389 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:45:25.735386    5389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:45:25.738347    5389 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:45:25.741405    5389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:45:25.744619    5389 config.go:182] Loaded profile config "kubernetes-upgrade-902000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0915 11:45:25.744868    5389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:45:25.749386    5389 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:45:25.756349    5389 start.go:297] selected driver: qemu2
	I0915 11:45:25.756359    5389 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:45:25.756414    5389 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:45:25.758779    5389 cni.go:84] Creating CNI manager for ""
	I0915 11:45:25.758811    5389 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:45:25.758839    5389 start.go:340] cluster config:
	{Name:kubernetes-upgrade-902000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-902000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:45:25.762208    5389 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:45:25.769280    5389 out.go:177] * Starting "kubernetes-upgrade-902000" primary control-plane node in "kubernetes-upgrade-902000" cluster
	I0915 11:45:25.773387    5389 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:45:25.773403    5389 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:45:25.773412    5389 cache.go:56] Caching tarball of preloaded images
	I0915 11:45:25.773473    5389 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:45:25.773478    5389 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:45:25.773528    5389 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kubernetes-upgrade-902000/config.json ...
	I0915 11:45:25.774088    5389 start.go:360] acquireMachinesLock for kubernetes-upgrade-902000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:45:25.774120    5389 start.go:364] duration metric: took 25.416µs to acquireMachinesLock for "kubernetes-upgrade-902000"
	I0915 11:45:25.774128    5389 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:45:25.774133    5389 fix.go:54] fixHost starting: 
	I0915 11:45:25.774246    5389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-902000: state=Stopped err=<nil>
	W0915 11:45:25.774254    5389 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:45:25.778442    5389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-902000" ...
	I0915 11:45:25.786319    5389 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:45:25.786351    5389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:14:b4:cb:66:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:25.788238    5389 main.go:141] libmachine: STDOUT: 
	I0915 11:45:25.788255    5389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:45:25.788284    5389 fix.go:56] duration metric: took 14.149916ms for fixHost
	I0915 11:45:25.788288    5389 start.go:83] releasing machines lock for "kubernetes-upgrade-902000", held for 14.164041ms
	W0915 11:45:25.788292    5389 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:45:25.788327    5389 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:45:25.788337    5389 start.go:729] Will try again in 5 seconds ...
	I0915 11:45:30.789165    5389 start.go:360] acquireMachinesLock for kubernetes-upgrade-902000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:45:30.789588    5389 start.go:364] duration metric: took 318.25µs to acquireMachinesLock for "kubernetes-upgrade-902000"
	I0915 11:45:30.789648    5389 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:45:30.789664    5389 fix.go:54] fixHost starting: 
	I0915 11:45:30.790271    5389 fix.go:112] recreateIfNeeded on kubernetes-upgrade-902000: state=Stopped err=<nil>
	W0915 11:45:30.790291    5389 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:45:30.799555    5389 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-902000" ...
	I0915 11:45:30.804507    5389 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:45:30.804765    5389 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:14:b4:cb:66:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubernetes-upgrade-902000/disk.qcow2
	I0915 11:45:30.813723    5389 main.go:141] libmachine: STDOUT: 
	I0915 11:45:30.814463    5389 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:45:30.814564    5389 fix.go:56] duration metric: took 24.901875ms for fixHost
	I0915 11:45:30.814581    5389 start.go:83] releasing machines lock for "kubernetes-upgrade-902000", held for 24.972208ms
	W0915 11:45:30.814843    5389 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-902000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:45:30.822584    5389 out.go:201] 
	W0915 11:45:30.825571    5389 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:45:30.825593    5389 out.go:270] * 
	* 
	W0915 11:45:30.827449    5389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:45:30.837596    5389 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-902000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-902000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-902000 version --output=json: exit status 1 (61.07725ms)

** stderr ** 
	error: context "kubernetes-upgrade-902000" does not exist

** /stderr **
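kubectl fails here because the aborted starts never wrote a kubeconfig entry for this profile; a quick confirmation with stock kubectl (the echo message is illustrative):

	kubectl config get-contexts -o name | grep kubernetes-upgrade-902000 || echo "context missing"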
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-15 11:45:30.911357 -0700 PDT m=+2996.763287584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-902000 -n kubernetes-upgrade-902000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-902000 -n kubernetes-upgrade-902000: exit status 7 (32.971375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-902000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-902000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-902000
--- FAIL: TestKubernetesUpgrade (18.50s)
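Every qemu2 start in this test died at the same point: socket_vmnet_client could not reach /var/run/socket_vmnet. A minimal host-side health check, using the paths from the command lines above (the launchctl query assumes socket_vmnet runs as a LaunchDaemon, which this log does not confirm):

	ls -l /var/run/socket_vmnet                    # the unix socket must exist
	pgrep -fl socket_vmnet                         # the daemon must be running
	sudo launchctl list | grep -i socket_vmnet     # only if installed as a LaunchDaemon (assumed)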

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.07s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1630970193/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.07s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.82s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19648
- KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current40267259/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.82s)
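Both TestHyperkitDriverSkipUpgrade subtests fail identically with DRV_UNSUPPORTED_OS: the hyperkit driver is built only for Intel Macs, and this agent is darwin/arm64. A sketch of the pre-check a wrapper script could run on such agents (the skip message is illustrative, not part of the suite):

	if [ "$(uname -s)/$(uname -m)" = "Darwin/arm64" ]; then
	  echo "SKIP: hyperkit driver is unsupported on darwin/arm64"
	fi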

TestStoppedBinaryUpgrade/Upgrade (573.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.385131217 start -p stopped-upgrade-515000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.385131217 start -p stopped-upgrade-515000 --memory=2200 --vm-driver=qemu2 : (39.834973541s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.385131217 -p stopped-upgrade-515000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.385131217 -p stopped-upgrade-515000 stop: (12.106726792s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0915 11:47:16.227781    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:49:13.132343    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:49:29.791566    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.926596542s)

-- stdout --
	* [stopped-upgrade-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-515000" primary control-plane node in "stopped-upgrade-515000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-515000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0915 11:46:23.887982    5437 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:46:23.888117    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:46:23.888121    5437 out.go:358] Setting ErrFile to fd 2...
	I0915 11:46:23.888124    5437 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:46:23.888309    5437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:46:23.889427    5437 out.go:352] Setting JSON to false
	I0915 11:46:23.907918    5437 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4546,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:46:23.907999    5437 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:46:23.911625    5437 out.go:177] * [stopped-upgrade-515000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:46:23.919716    5437 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:46:23.919765    5437 notify.go:220] Checking for updates...
	I0915 11:46:23.926664    5437 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:46:23.928126    5437 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:46:23.931592    5437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:46:23.934626    5437 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:46:23.937672    5437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:46:23.941012    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:46:23.944610    5437 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 11:46:23.947702    5437 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:46:23.951569    5437 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:46:23.958636    5437 start.go:297] selected driver: qemu2
	I0915 11:46:23.958642    5437 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:46:23.958688    5437 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:46:23.961559    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:46:23.961604    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:46:23.961626    5437 start.go:340] cluster config:
	{Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
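Editor's note: the dump above is minikube's cluster config struct printed with Go's %+v verb, and the same struct is what profile.go persists to config.json a few lines further down. A minimal sketch of that save path, using a hypothetical three-field stand-in for the real (much larger) struct:

    package main

    import (
        "encoding/json"
        "log"
        "os"
    )

    // Config is a hypothetical stand-in for minikube's ClusterConfig;
    // only the shape of the log-then-save path matters here.
    type Config struct {
        Name   string `json:"Name"`
        Driver string `json:"Driver"`
        CPUs   int    `json:"CPUs"`
    }

    func main() {
        cfg := Config{Name: "stopped-upgrade-515000", Driver: "qemu2", CPUs: 2}
        // %+v is what produces the long single-line dumps seen in the log.
        log.Printf("cluster config: %+v", cfg)
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            log.Fatal(err)
        }
    }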
	I0915 11:46:23.961687    5437 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:46:23.968625    5437 out.go:177] * Starting "stopped-upgrade-515000" primary control-plane node in "stopped-upgrade-515000" cluster
	I0915 11:46:23.971601    5437 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:46:23.971636    5437 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0915 11:46:23.971644    5437 cache.go:56] Caching tarball of preloaded images
	I0915 11:46:23.971735    5437 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:46:23.971741    5437 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0915 11:46:23.971799    5437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/config.json ...
	I0915 11:46:23.972292    5437 start.go:360] acquireMachinesLock for stopped-upgrade-515000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:46:23.972333    5437 start.go:364] duration metric: took 32.417µs to acquireMachinesLock for "stopped-upgrade-515000"
	I0915 11:46:23.972343    5437 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:46:23.972347    5437 fix.go:54] fixHost starting: 
	I0915 11:46:23.972457    5437 fix.go:112] recreateIfNeeded on stopped-upgrade-515000: state=Stopped err=<nil>
	W0915 11:46:23.972466    5437 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:46:23.980436    5437 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-515000" ...
	I0915 11:46:23.984597    5437 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:46:23.984671    5437 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50515-:22,hostfwd=tcp::50516-:2376,hostname=stopped-upgrade-515000 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/disk.qcow2
	I0915 11:46:24.033062    5437 main.go:141] libmachine: STDOUT: 
	I0915 11:46:24.033106    5437 main.go:141] libmachine: STDERR: 
	I0915 11:46:24.033112    5437 main.go:141] libmachine: Waiting for VM to start (ssh -p 50515 docker@127.0.0.1)...
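Editor's note: the restart boils down to re-invoking qemu-system-aarch64 with hvf acceleration and user-mode networking, where the hostfwd entries map host ports 50515 and 50516 to the guest's SSH (22) and Docker (2376) ports; the "Waiting for VM to start" line is minikube polling the forwarded SSH port. A rough sketch of building such an invocation with os/exec (this is not minikube's actual code, and the disk path is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Forward host 50515 -> guest 22 (SSH) and host 50516 -> guest 2376
        // (Docker), mirroring the -nic user,...,hostfwd=... arguments above.
        nic := fmt.Sprintf("user,model=virtio,hostfwd=tcp::%d-:22,hostfwd=tcp::%d-:2376", 50515, 50516)
        cmd := exec.Command("qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-accel", "hvf", // macOS Hypervisor.framework acceleration
            "-m", "2200", "-smp", "2",
            "-nic", nic,
            "-daemonize", "disk.qcow2", // illustrative disk image path
        )
        if err := cmd.Run(); err != nil {
            log.Fatalf("qemu failed to start: %v", err)
        }
        // With -daemonize, Run returns once the VM has forked into the
        // background; the caller then waits for SSH on the forwarded port.
    }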
	I0915 11:46:43.879784    5437 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/config.json ...
	I0915 11:46:43.880724    5437 machine.go:93] provisionDockerMachine start ...
	I0915 11:46:43.881071    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:43.881716    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:43.881736    5437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 11:46:43.970337    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0915 11:46:43.970374    5437 buildroot.go:166] provisioning hostname "stopped-upgrade-515000"
	I0915 11:46:43.970521    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:43.970796    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:43.970811    5437 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-515000 && echo "stopped-upgrade-515000" | sudo tee /etc/hostname
	I0915 11:46:44.056199    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-515000
	
	I0915 11:46:44.056287    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.056457    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.056479    5437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-515000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-515000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-515000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 11:46:44.127535    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
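Editor's note: the shell block above is an idempotent hostname fixup — if no line in /etc/hosts already resolves the new hostname, the 127.0.1.1 entry is rewritten (or appended) to point at it, so subsequent runs change nothing. The same logic, sketched in Go against a scratch file rather than the real /etc/hosts:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostname mirrors the shell above: do nothing if the file already
    // resolves name, rewrite an existing 127.0.1.1 line if there is one,
    // otherwise append a fresh entry.
    func ensureHostname(path, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), " "+name) || strings.Contains(string(data), "\t"+name) {
            return nil // some line already maps to the hostname
        }
        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+name)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        // "hosts.test" stands in for /etc/hosts so the sketch runs unprivileged.
        if err := ensureHostname("hosts.test", "stopped-upgrade-515000"); err != nil {
            log.Fatal(err)
        }
    }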
	I0915 11:46:44.127548    5437 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19648-1650/.minikube CaCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19648-1650/.minikube}
	I0915 11:46:44.127556    5437 buildroot.go:174] setting up certificates
	I0915 11:46:44.127561    5437 provision.go:84] configureAuth start
	I0915 11:46:44.127565    5437 provision.go:143] copyHostCerts
	I0915 11:46:44.127643    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem, removing ...
	I0915 11:46:44.127651    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem
	I0915 11:46:44.127806    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.pem (1078 bytes)
	I0915 11:46:44.128010    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem, removing ...
	I0915 11:46:44.128015    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem
	I0915 11:46:44.128292    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/cert.pem (1123 bytes)
	I0915 11:46:44.128414    5437 exec_runner.go:144] found /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem, removing ...
	I0915 11:46:44.128420    5437 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem
	I0915 11:46:44.128486    5437 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19648-1650/.minikube/key.pem (1679 bytes)
	I0915 11:46:44.128596    5437 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-515000 san=[127.0.0.1 localhost minikube stopped-upgrade-515000]
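Editor's note: configureAuth regenerates the Docker TLS server certificate so its subject-alternative names cover every address a client might dial — 127.0.0.1, localhost, minikube, and the profile name, per the san=[...] list above. A compressed sketch of CA-signed SAN certificate generation with crypto/x509 (the CA is created inline here for brevity; minikube reuses the one under .minikube/certs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key + self-signed CA cert (stand-in for .minikube/certs/ca.pem).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-515000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-515000"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }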
	I0915 11:46:44.324753    5437 provision.go:177] copyRemoteCerts
	I0915 11:46:44.324810    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 11:46:44.324821    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.361996    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 11:46:44.368797    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0915 11:46:44.375286    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 11:46:44.382126    5437 provision.go:87] duration metric: took 254.557875ms to configureAuth
	I0915 11:46:44.382134    5437 buildroot.go:189] setting minikube options for container-runtime
	I0915 11:46:44.382245    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:46:44.382287    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.382380    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.382386    5437 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 11:46:44.448750    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0915 11:46:44.448761    5437 buildroot.go:70] root file system type: tmpfs
	I0915 11:46:44.448815    5437 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 11:46:44.448880    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.448999    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.449034    5437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 11:46:44.516091    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 11:46:44.516152    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.516313    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.516324    5437 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 11:46:44.850192    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0915 11:46:44.850208    5437 machine.go:96] duration metric: took 969.483833ms to provisionDockerMachine
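Editor's note: the `diff -u ... || { mv; daemon-reload; enable; restart; }` one-liner above is a write-if-changed pattern — the candidate unit is written to docker.service.new, and only when it differs from the installed unit (here it did not exist yet, hence the "can't stat" message) is it moved into place and the daemon restarted. The same pattern in Go, with hypothetical paths:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // installIfChanged replaces dst with src and restarts the unit only when
    // the contents differ, mirroring the shell idiom in the log above.
    func installIfChanged(src, dst, unit string) error {
        want, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        have, err := os.ReadFile(dst) // a missing dst always counts as different
        if err == nil && bytes.Equal(want, have) {
            return nil // unchanged: skip the disruptive restart
        }
        if err := os.Rename(src, dst); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", unit}, {"restart", unit},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := installIfChanged("docker.service.new", "docker.service", "docker"); err != nil {
            log.Fatal(err)
        }
    }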
	I0915 11:46:44.850215    5437 start.go:293] postStartSetup for "stopped-upgrade-515000" (driver="qemu2")
	I0915 11:46:44.850222    5437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 11:46:44.850284    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 11:46:44.850293    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.885586    5437 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 11:46:44.886922    5437 info.go:137] Remote host: Buildroot 2021.02.12
	I0915 11:46:44.886930    5437 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/addons for local assets ...
	I0915 11:46:44.887012    5437 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19648-1650/.minikube/files for local assets ...
	I0915 11:46:44.887105    5437 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem -> 21742.pem in /etc/ssl/certs
	I0915 11:46:44.887211    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0915 11:46:44.890319    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:46:44.897460    5437 start.go:296] duration metric: took 47.239417ms for postStartSetup
	I0915 11:46:44.897473    5437 fix.go:56] duration metric: took 20.925384s for fixHost
	I0915 11:46:44.897521    5437 main.go:141] libmachine: Using SSH client type: native
	I0915 11:46:44.897626    5437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102e5d190] 0x102e5f9d0 <nil>  [] 0s} localhost 50515 <nil> <nil>}
	I0915 11:46:44.897631    5437 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0915 11:46:44.960390    5437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726426005.155784879
	
	I0915 11:46:44.960400    5437 fix.go:216] guest clock: 1726426005.155784879
	I0915 11:46:44.960405    5437 fix.go:229] Guest: 2024-09-15 11:46:45.155784879 -0700 PDT Remote: 2024-09-15 11:46:44.897475 -0700 PDT m=+21.037902418 (delta=258.309879ms)
	I0915 11:46:44.960417    5437 fix.go:200] guest clock delta is within tolerance: 258.309879ms
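Editor's note: the guest clock check runs `date +%s.%N` in the VM and compares the result against host time; here the guest read 1726426005.155784879 against a host time of roughly ...004.897, a skew of ~258ms, which is inside tolerance so no resync was needed. Reproducing the delta computation from the two timestamps in the log:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N` and the host timestamp, both from the log.
        guestOut := "1726426005.155784879"
        hostSecs := 1726426004.897475

        g, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(g*1e9))
        host := time.Unix(0, int64(hostSecs*1e9))

        // float64 keeps roughly microsecond precision at this magnitude,
        // which is plenty for a skew check.
        delta := guest.Sub(host)
        const tolerance = time.Second // illustrative bound, not minikube's exact one
        fmt.Printf("guest clock delta: %v, within tolerance: %v\n",
            delta, delta < tolerance && delta > -tolerance)
    }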
	I0915 11:46:44.960422    5437 start.go:83] releasing machines lock for "stopped-upgrade-515000", held for 20.988339834s
	I0915 11:46:44.960498    5437 ssh_runner.go:195] Run: cat /version.json
	I0915 11:46:44.960508    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:46:44.960646    5437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 11:46:44.960702    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	W0915 11:46:44.961283    5437 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50515: connect: connection refused
	I0915 11:46:44.961302    5437 retry.go:31] will retry after 278.314216ms: dial tcp [::1]:50515: connect: connection refused
	W0915 11:46:45.291646    5437 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0915 11:46:45.291784    5437 ssh_runner.go:195] Run: systemctl --version
	I0915 11:46:45.295785    5437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0915 11:46:45.299089    5437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0915 11:46:45.299147    5437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0915 11:46:45.304752    5437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0915 11:46:45.312072    5437 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0915 11:46:45.312090    5437 start.go:495] detecting cgroup driver to use...
	I0915 11:46:45.312196    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:46:45.322326    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0915 11:46:45.326582    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 11:46:45.330633    5437 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 11:46:45.330675    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 11:46:45.334399    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:46:45.337861    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 11:46:45.341100    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 11:46:45.344001    5437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 11:46:45.347080    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 11:46:45.350269    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 11:46:45.353529    5437 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 11:46:45.356248    5437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 11:46:45.359166    5437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 11:46:45.362201    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:45.422183    5437 ssh_runner.go:195] Run: sudo systemctl restart containerd
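Editor's note: each of those sed invocations is a targeted regex rewrite of /etc/containerd/config.toml — forcing SystemdCgroup = false to match the chosen cgroupfs driver, swapping the runc runtime name, pinning the sandbox image, and so on — followed by a single daemon-reload and restart. One of the rewrites, sketched with Go's regexp package on an inline sample:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
    }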
	I0915 11:46:45.428681    5437 start.go:495] detecting cgroup driver to use...
	I0915 11:46:45.428736    5437 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 11:46:45.438258    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:46:45.443125    5437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0915 11:46:45.449216    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0915 11:46:45.453953    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 11:46:45.458625    5437 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0915 11:46:45.506713    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 11:46:45.511731    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 11:46:45.517074    5437 ssh_runner.go:195] Run: which cri-dockerd
	I0915 11:46:45.518368    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 11:46:45.521297    5437 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0915 11:46:45.526315    5437 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 11:46:45.588450    5437 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 11:46:45.648857    5437 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 11:46:45.648925    5437 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 11:46:45.654189    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:45.713017    5437 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:46:46.859958    5437 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.146939208s)
	I0915 11:46:46.860029    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 11:46:46.864930    5437 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0915 11:46:46.871136    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:46:46.876219    5437 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 11:46:46.940090    5437 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 11:46:47.012197    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:47.076282    5437 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 11:46:47.082803    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 11:46:47.088591    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:47.151799    5437 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 11:46:47.193458    5437 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 11:46:47.193548    5437 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 11:46:47.196086    5437 start.go:563] Will wait 60s for crictl version
	I0915 11:46:47.196155    5437 ssh_runner.go:195] Run: which crictl
	I0915 11:46:47.197666    5437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 11:46:47.213238    5437 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0915 11:46:47.213323    5437 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:46:47.230436    5437 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 11:46:47.248966    5437 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0915 11:46:47.249056    5437 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0915 11:46:47.250511    5437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 11:46:47.254765    5437 kubeadm.go:883] updating cluster {Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0915 11:46:47.254821    5437 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0915 11:46:47.254884    5437 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:46:47.265972    5437 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:46:47.265981    5437 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0915 11:46:47.266034    5437 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:46:47.269427    5437 ssh_runner.go:195] Run: which lz4
	I0915 11:46:47.270775    5437 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0915 11:46:47.272138    5437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0915 11:46:47.272155    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0915 11:46:48.195281    5437 docker.go:649] duration metric: took 924.554625ms to copy over tarball
	I0915 11:46:48.195350    5437 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0915 11:46:49.351397    5437 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.156047083s)
	I0915 11:46:49.351414    5437 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0915 11:46:49.367151    5437 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0915 11:46:49.370675    5437 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0915 11:46:49.375777    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:49.440823    5437 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 11:46:51.152747    5437 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.711927125s)
	I0915 11:46:51.152856    5437 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 11:46:51.165129    5437 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 11:46:51.165139    5437 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0915 11:46:51.165145    5437 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
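Editor's note: LoadCachedImages works per image — ask the runtime for the image ID (`docker image inspect --format {{.Id}}`), and if the ID is missing or differs from the expected hash, remove the stale tag and reload the image from the on-disk cache, which is exactly the "needs transfer ... does not exist at hash ..." sequence below. A sketch of the inspect-and-compare step, shelling out the way ssh_runner does (the expected hash is taken from the pause:3.7 line later in this log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether img is absent from the local Docker
    // daemon or present under a different ID than expected.
    func needsTransfer(img, wantID string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", img).Output()
        if err != nil {
            return true // not present at all
        }
        return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
        img := "registry.k8s.io/pause:3.7"
        want := "sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
        fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, want))
    }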
	I0915 11:46:51.169223    5437 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:51.171290    5437 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.173149    5437 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:51.173179    5437 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.175086    5437 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.175212    5437 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.176438    5437 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.176539    5437 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.177559    5437 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.177563    5437 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.178737    5437 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.179891    5437 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.180152    5437 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.181187    5437 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0915 11:46:51.182585    5437 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.183318    5437 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0915 11:46:51.571741    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.585071    5437 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0915 11:46:51.585105    5437 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.585176    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0915 11:46:51.588497    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.603870    5437 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0915 11:46:51.603900    5437 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.603971    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0915 11:46:51.604121    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0915 11:46:51.617452    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0915 11:46:51.622625    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632823    5437 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0915 11:46:51.632845    5437 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632915    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0915 11:46:51.632917    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.640503    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.646298    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0915 11:46:51.646576    5437 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0915 11:46:51.646594    5437 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.646652    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0915 11:46:51.655121    5437 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0915 11:46:51.655144    5437 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.655222    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0915 11:46:51.662439    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0915 11:46:51.668464    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0915 11:46:51.687622    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0915 11:46:51.694995    5437 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0915 11:46:51.695138    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.697698    5437 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0915 11:46:51.697720    5437 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0915 11:46:51.697772    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0915 11:46:51.711079    5437 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0915 11:46:51.711100    5437 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.711165    5437 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0915 11:46:51.713043    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0915 11:46:51.713159    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0915 11:46:51.722187    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0915 11:46:51.722202    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0915 11:46:51.722213    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0915 11:46:51.722311    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:46:51.724704    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0915 11:46:51.724719    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0915 11:46:51.735856    5437 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0915 11:46:51.735883    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0915 11:46:51.776774    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0915 11:46:51.780797    5437 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0915 11:46:51.780806    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0915 11:46:51.816704    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
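Editor's note: loading from cache is literally `sudo cat <tarball> | docker load` — the cached image tarball is streamed into the daemon over the SSH session. Run locally, the same thing looks like this in Go, wiring a file to the command's stdin (the filename stands in for /var/lib/minikube/images/pause_3.7 on the guest):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("pause_3.7")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        cmd := exec.Command("docker", "load")
        cmd.Stdin = f // equivalent of `cat pause_3.7 | docker load`
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("docker load: %v: %s", err, out)
        }
        log.Printf("loaded: %s", out)
    }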
	W0915 11:46:52.033673    5437 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0915 11:46:52.033995    5437 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.065685    5437 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0915 11:46:52.065726    5437 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.065858    5437 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:46:52.089909    5437 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0915 11:46:52.090093    5437 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:46:52.092160    5437 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0915 11:46:52.092182    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0915 11:46:52.124669    5437 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0915 11:46:52.124685    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0915 11:46:52.358373    5437 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0915 11:46:52.358412    5437 cache_images.go:92] duration metric: took 1.193265084s to LoadCachedImages
	W0915 11:46:52.358454    5437 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0915 11:46:52.358462    5437 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0915 11:46:52.358509    5437 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-515000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 11:46:52.358600    5437 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 11:46:52.372088    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:46:52.372102    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:46:52.372106    5437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 11:46:52.372116    5437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-515000 NodeName:stopped-upgrade-515000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 11:46:52.372182    5437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-515000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 11:46:52.372243    5437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0915 11:46:52.375217    5437 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 11:46:52.375248    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 11:46:52.378154    5437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0915 11:46:52.382996    5437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 11:46:52.387894    5437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
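Editor's note: the kubeadm config printed above is rendered from the kubeadm options struct and shipped to /var/tmp/minikube/kubeadm.yaml.new before kubeadm ever runs. A toy version of that render step with text/template — the struct and template here are stand-ins covering a few of the fields, not minikube's real ones:

    package main

    import (
        "os"
        "text/template"
    )

    // Opts is a hypothetical slice of the kubeadm options seen in the log.
    type Opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
    }

    var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    networking:
      podSubnet: "{{.PodSubnet}}"
    `))

    func main() {
        opts := Opts{"10.0.2.15", 8443, "stopped-upgrade-515000", "10.244.0.0/16"}
        // minikube writes the rendered YAML to kubeadm.yaml.new and compares it
        // against the current file before swapping it in.
        if err := tmpl.Execute(os.Stdout, opts); err != nil {
            panic(err)
        }
    }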
	I0915 11:46:52.393139    5437 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0915 11:46:52.394224    5437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 11:46:52.398299    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:46:52.456120    5437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:46:52.461670    5437 certs.go:68] Setting up /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000 for IP: 10.0.2.15
	I0915 11:46:52.461682    5437 certs.go:194] generating shared ca certs ...
	I0915 11:46:52.461690    5437 certs.go:226] acquiring lock for ca certs: {Name:mkae14c7548e7e09ac75f5a854dc2935289ebc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.461846    5437 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key
	I0915 11:46:52.461883    5437 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key
	I0915 11:46:52.461888    5437 certs.go:256] generating profile certs ...
	I0915 11:46:52.461947    5437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key
	I0915 11:46:52.461963    5437 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb
	I0915 11:46:52.461972    5437 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0915 11:46:52.572755    5437 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb ...
	I0915 11:46:52.572774    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb: {Name:mkf2e38a464651807a582ee966b82ec0b7cc1e16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.573090    5437 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb ...
	I0915 11:46:52.573094    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb: {Name:mk343596d640a172cbd21cac5c220f0c028bad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.573237    5437 certs.go:381] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt.ffab0dcb -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt
	I0915 11:46:52.573606    5437 certs.go:385] copying /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key.ffab0dcb -> /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key
	I0915 11:46:52.573758    5437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.key
	I0915 11:46:52.573878    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem (1338 bytes)
	W0915 11:46:52.573907    5437 certs.go:480] ignoring /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174_empty.pem, impossibly tiny 0 bytes
	I0915 11:46:52.573926    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 11:46:52.573958    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem (1078 bytes)
	I0915 11:46:52.573979    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem (1123 bytes)
	I0915 11:46:52.573997    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/key.pem (1679 bytes)
	I0915 11:46:52.574056    5437 certs.go:484] found cert: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem (1708 bytes)
	I0915 11:46:52.574373    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 11:46:52.581519    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 11:46:52.588330    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 11:46:52.595454    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 11:46:52.602833    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0915 11:46:52.610962    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 11:46:52.618765    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 11:46:52.625968    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 11:46:52.633068    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/ssl/certs/21742.pem --> /usr/share/ca-certificates/21742.pem (1708 bytes)
	I0915 11:46:52.639640    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 11:46:52.646649    5437 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/2174.pem --> /usr/share/ca-certificates/2174.pem (1338 bytes)
	I0915 11:46:52.653073    5437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 11:46:52.657987    5437 ssh_runner.go:195] Run: openssl version
	I0915 11:46:52.659793    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2174.pem && ln -fs /usr/share/ca-certificates/2174.pem /etc/ssl/certs/2174.pem"
	I0915 11:46:52.663237    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.664836    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 18:11 /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.664867    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2174.pem
	I0915 11:46:52.666514    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2174.pem /etc/ssl/certs/51391683.0"
	I0915 11:46:52.669521    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/21742.pem && ln -fs /usr/share/ca-certificates/21742.pem /etc/ssl/certs/21742.pem"
	I0915 11:46:52.672427    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.673711    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 18:11 /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.673734    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/21742.pem
	I0915 11:46:52.675378    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/21742.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 11:46:52.678725    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 11:46:52.681911    5437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.683289    5437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.683307    5437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 11:46:52.685130    5437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
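
The three ln -fs commands above follow OpenSSL's c_rehash convention: every CA dropped into /etc/ssl/certs is also linked under its subject-name hash plus a ".0" suffix (51391683.0, 3ec20f2e.0, b5213941.0 in this run), because that hashed name is what libssl actually looks up when verifying a chain. A minimal shell sketch of the same pattern, reusing the minikubeCA path from the log (illustrative, not minikube's implementation):

    # compute the subject hash openssl uses for trust-store lookups
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # expose the CA under <hash>.0 so the verifier can find it
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
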
	I0915 11:46:52.687931    5437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 11:46:52.689356    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 11:46:52.691116    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 11:46:52.692833    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 11:46:52.694536    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 11:46:52.696390    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 11:46:52.698124    5437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
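
Each -checkend 86400 probe above asks openssl whether the named certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, non-zero means it is expired or about to expire and would need regeneration before the control plane restarts. In isolation (certificate path taken from the log):

    # exit 0 if the cert outlives the next 24h, 1 if it expires sooner
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "cert ok" || echo "cert expires within 24h"
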
	I0915 11:46:52.699967    5437 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-515000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50549 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-515000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0915 11:46:52.700037    5437 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:46:52.710623    5437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 11:46:52.714047    5437 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 11:46:52.714060    5437 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 11:46:52.714085    5437 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 11:46:52.717635    5437 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 11:46:52.717935    5437 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-515000" does not appear in /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:46:52.718034    5437 kubeconfig.go:62] /Users/jenkins/minikube-integration/19648-1650/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-515000" cluster setting kubeconfig missing "stopped-upgrade-515000" context setting]
	I0915 11:46:52.718228    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:46:52.718727    5437 kapi.go:59] client config for stopped-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104435800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:46:52.719055    5437 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 11:46:52.722133    5437 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-515000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0915 11:46:52.722139    5437 kubeadm.go:1160] stopping kube-system containers ...
	I0915 11:46:52.722187    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 11:46:52.734296    5437 docker.go:483] Stopping containers: [3c2c62219606 430d8ca67bc4 66a874cf4b12 c1d50cfb639e 65c77278924b a674ca46f50d 14151d79a4b7 40d74a81f121]
	I0915 11:46:52.734373    5437 ssh_runner.go:195] Run: docker stop 3c2c62219606 430d8ca67bc4 66a874cf4b12 c1d50cfb639e 65c77278924b a674ca46f50d 14151d79a4b7 40d74a81f121
	I0915 11:46:52.745213    5437 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0915 11:46:52.750885    5437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:46:52.753897    5437 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:46:52.753903    5437 kubeadm.go:157] found existing configuration files:
	
	I0915 11:46:52.753927    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf
	I0915 11:46:52.756611    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:46:52.756641    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:46:52.759375    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf
	I0915 11:46:52.762204    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:46:52.762230    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:46:52.764737    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf
	I0915 11:46:52.767300    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:46:52.767324    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:46:52.770584    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf
	I0915 11:46:52.773252    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:46:52.773282    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
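
The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint (https://control-plane.minikube.internal:50549 in this run), and any file that doesn't match, including files that don't exist, is removed so kubeadm regenerates it. A compact sketch of the same loop, with the endpoint and file names taken from the log (the loop form is illustrative):

    # drop any kubeconfig that does not point at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:50549 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done
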
	I0915 11:46:52.775694    5437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:46:52.778733    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:52.803026    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.211172    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.325265    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0915 11:46:53.345718    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
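
For a restart, only a subset of kubeadm init is re-run, phase by phase, in dependency order: certs, then kubeconfig files, then kubelet bring-up, then the static control-plane pods, then local etcd. A sketch of the same sequence under the assumptions visible in the log (binary path and config file copied from the commands above; the loop itself is illustrative):

    # re-run just the kubeadm phases a cluster restart needs, in order;
    # $phase is intentionally unquoted so "certs all" splits into two args
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
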
	I0915 11:46:53.372610    5437 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:46:53.372699    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:46:53.873890    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:46:54.374759    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:46:54.379615    5437 api_server.go:72] duration metric: took 1.007018875s to wait for apiserver process to appear ...
	I0915 11:46:54.379624    5437 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:46:54.379637    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:46:59.381692    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:46:59.381760    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:04.382235    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:04.382355    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:09.383250    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:09.383292    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:14.384557    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:14.384658    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:19.385993    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:19.386015    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:24.387402    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:24.387440    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:29.389345    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:29.389424    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:34.391877    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:34.391987    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:39.393367    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:39.393398    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:44.395134    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:44.395233    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:49.397764    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:47:49.397811    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:47:54.400186    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
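
Every probe in the loop above hits its five-second client timeout without the apiserver ever answering, at which point minikube gives up on the healthz wait and switches to collecting diagnostics. A rough shell equivalent of the probe cadence, with the endpoint and per-request timeout copied from the log (this loop is a sketch of the behavior, not minikube's Go code, and omits the overall deadline minikube also enforces):

    # poll the apiserver health endpoint; each attempt is capped at 5s,
    # mirroring the "context deadline exceeded" interval in the log
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
      echo "apiserver healthz not reachable yet, retrying"
    done
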
	I0915 11:47:54.400453    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:47:54.421437    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:47:54.421573    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:47:54.436076    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:47:54.436173    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:47:54.448418    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:47:54.448503    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:47:54.459250    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:47:54.459334    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:47:54.470074    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:47:54.470152    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:47:54.484894    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:47:54.484970    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:47:54.494895    5437 logs.go:276] 0 containers: []
	W0915 11:47:54.494910    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:47:54.494982    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:47:54.505408    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:47:54.505425    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:47:54.505429    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:47:54.521174    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:47:54.521183    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:47:54.532949    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:47:54.532957    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:47:54.574175    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:47:54.574188    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:47:54.656220    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:47:54.656233    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:47:54.670651    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:47:54.670666    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:47:54.713689    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:47:54.713702    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:47:54.728670    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:47:54.728681    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:47:54.739987    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:47:54.739999    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
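
The "container status" command above is a fallback chain: use crictl when `which` can find it, otherwise run the bare name so the resulting error is legible, and if the CRI CLI fails for any reason fall through to plain docker. The same command spelled out on its own (reformatted for readability, no behavior change):

    # prefer crictl when present; fall back to docker if the CRI CLI fails
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
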
	I0915 11:47:54.755764    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:47:54.755778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:47:54.769640    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:47:54.769649    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:47:54.781346    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:47:54.781355    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:47:54.798933    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:47:54.798949    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:47:54.824527    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:47:54.824540    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:47:54.828815    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:47:54.828822    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:47:54.843113    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:47:54.843126    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:47:57.360353    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:02.362658    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:02.363195    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:02.408505    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:02.408648    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:02.429614    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:02.429705    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:02.443901    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:02.443988    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:02.456298    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:02.456394    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:02.466945    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:02.467024    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:02.477329    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:02.477420    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:02.487622    5437 logs.go:276] 0 containers: []
	W0915 11:48:02.487633    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:02.487700    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:02.498652    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:02.498669    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:02.498674    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:02.503292    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:02.503299    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:02.540149    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:02.540159    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:02.555144    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:02.555158    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:02.567159    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:02.567172    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:02.582345    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:02.582360    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:02.593661    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:02.593674    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:02.609708    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:02.609724    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:02.624350    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:02.624367    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:02.649434    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:02.649443    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:02.688650    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:02.688661    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:02.703497    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:02.703510    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:02.715769    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:02.715779    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:02.727840    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:02.727854    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:02.767013    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:02.767023    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:02.784987    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:02.784997    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:05.300418    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:10.301447    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:10.301730    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:10.323950    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:10.324067    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:10.339846    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:10.339937    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:10.352488    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:10.352582    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:10.363569    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:10.363657    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:10.374141    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:10.374220    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:10.388441    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:10.388523    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:10.398614    5437 logs.go:276] 0 containers: []
	W0915 11:48:10.398629    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:10.398698    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:10.409057    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:10.409073    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:10.409078    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:10.449462    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:10.449478    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:10.464408    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:10.464417    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:10.475977    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:10.475988    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:10.513326    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:10.513336    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:10.549655    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:10.549667    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:10.562170    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:10.562188    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:10.576021    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:10.576035    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:10.589748    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:10.589761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:10.603835    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:10.603846    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:10.618322    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:10.618336    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:10.633391    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:10.633402    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:10.637670    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:10.637682    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:10.650584    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:10.650595    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:10.663097    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:10.663108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:10.680548    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:10.680559    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:13.206618    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:18.208898    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:18.209109    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:18.221634    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:18.221712    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:18.237891    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:18.237977    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:18.248566    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:18.248655    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:18.259019    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:18.259113    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:18.269115    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:18.269198    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:18.279412    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:18.279487    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:18.289798    5437 logs.go:276] 0 containers: []
	W0915 11:48:18.289810    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:18.289887    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:18.300164    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:18.300180    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:18.300186    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:18.326502    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:18.326514    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:18.339402    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:18.339417    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:18.376018    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:18.376032    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:18.390139    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:18.390152    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:18.403154    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:18.403164    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:18.419452    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:18.419466    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:18.457963    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:18.457976    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:18.495364    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:18.495374    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:18.506257    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:18.506267    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:18.521633    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:18.521642    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:18.525837    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:18.525843    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:18.539991    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:18.540005    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:18.551502    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:18.551512    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:18.565534    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:18.565544    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:18.579557    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:18.579569    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:21.101842    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:26.104082    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:26.104316    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:26.127123    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:26.127231    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:26.141008    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:26.141102    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:26.155972    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:26.156059    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:26.169046    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:26.169139    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:26.179532    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:26.179616    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:26.189860    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:26.189943    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:26.200095    5437 logs.go:276] 0 containers: []
	W0915 11:48:26.200105    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:26.200177    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:26.210883    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:26.210903    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:26.210908    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:26.222843    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:26.222854    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:26.259977    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:26.259985    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:26.274483    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:26.274493    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:26.285392    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:26.285405    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:26.299340    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:26.299349    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:26.311830    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:26.311842    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:26.323554    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:26.323565    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:26.328324    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:26.328330    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:26.343061    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:26.343072    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:26.380913    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:26.380925    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:26.394909    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:26.394921    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:26.412688    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:26.412699    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:26.448675    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:26.448689    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:26.463393    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:26.463403    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:26.488718    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:26.488726    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:29.005671    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:34.007950    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:34.008139    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:34.028547    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:34.028644    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:34.042948    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:34.043037    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:34.054209    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:34.054287    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:34.065268    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:34.065354    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:34.075473    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:34.075555    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:34.089865    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:34.089949    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:34.106205    5437 logs.go:276] 0 containers: []
	W0915 11:48:34.106218    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:34.106292    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:34.116363    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:34.116382    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:34.116387    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:34.133958    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:34.133969    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:34.145764    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:34.145774    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:34.157475    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:34.157487    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:34.173474    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:34.173483    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:34.211237    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:34.211247    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:34.225842    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:34.225853    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:34.239490    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:34.239499    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:34.256786    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:34.256796    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:34.268126    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:34.268137    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:34.281075    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:34.281090    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:34.321666    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:34.321676    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:34.355639    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:34.355651    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:34.370346    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:34.370355    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:34.394843    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:34.394855    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:34.399068    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:34.399073    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:36.916473    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:41.918720    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:41.918900    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:41.935576    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:41.935682    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:41.948398    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:41.948482    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:41.963965    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:41.964037    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:41.974588    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:41.974676    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:41.985295    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:41.985381    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:41.995792    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:41.995875    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:42.007643    5437 logs.go:276] 0 containers: []
	W0915 11:48:42.007659    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:42.007725    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:42.018392    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:42.018409    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:42.018415    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:42.032313    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:42.032322    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:42.050178    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:42.050187    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:42.088852    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:42.088863    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:42.124387    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:42.124402    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:42.146176    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:42.146186    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:42.184750    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:42.184761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:42.198957    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:42.198973    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:42.210628    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:42.210638    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:42.236154    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:42.236169    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:42.252213    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:42.252223    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:42.266509    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:42.266518    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:42.281892    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:42.281909    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:42.293179    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:42.293188    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:42.305040    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:42.305051    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:42.316462    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:42.316475    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:44.821951    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:49.824354    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:49.824852    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:49.858464    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:49.858618    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:49.877220    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:49.877324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:49.894148    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:49.894227    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:49.905232    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:49.905324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:49.916129    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:49.916208    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:49.927305    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:49.927390    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:49.942098    5437 logs.go:276] 0 containers: []
	W0915 11:48:49.942108    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:49.942184    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:49.953181    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:49.953200    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:49.953206    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:49.967407    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:49.967416    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:49.978975    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:49.978986    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:49.990694    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:49.990704    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:50.008414    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:50.008423    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:50.046791    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:50.046802    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:50.059189    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:50.059203    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:50.095110    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:50.095121    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:50.119904    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:50.119914    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:50.157194    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:50.157203    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:50.171827    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:50.171837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:50.185802    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:50.185813    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:50.197372    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:50.197383    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:48:50.215992    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:50.216001    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:50.234213    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:50.234224    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:50.246719    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:50.246727    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
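
Each probe failure triggers the same discovery pass: one docker ps query per control-plane component, filtered on the k8s_<name> prefix that kubelet's Docker shim gives pod containers. Where a component reports two IDs, the second is likely an exited instance left over from a restart. A hypothetical Go equivalent of that pass (the helper name and component list are mine, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs the same query as the log's
    // `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` lines.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil // one ID per output line
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("%s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
    	}
    }
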
	I0915 11:48:52.753161    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:48:57.755382    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:48:57.755562    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:48:57.767553    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:48:57.767646    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:48:57.778445    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:48:57.778525    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:48:57.789174    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:48:57.789255    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:48:57.799772    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:48:57.799847    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:48:57.815094    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:48:57.815171    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:48:57.826349    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:48:57.826428    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:48:57.836472    5437 logs.go:276] 0 containers: []
	W0915 11:48:57.836482    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:48:57.836537    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:48:57.847389    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:48:57.847406    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:48:57.847411    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:48:57.861873    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:48:57.861884    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:48:57.877696    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:48:57.877711    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:48:57.891871    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:48:57.891885    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:48:57.904634    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:48:57.904646    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:48:57.916280    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:48:57.916292    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:48:57.931659    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:48:57.931674    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:48:57.957331    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:48:57.957347    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:48:57.961842    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:48:57.961849    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:48:57.999050    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:48:57.999063    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:48:58.015174    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:48:58.015186    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:48:58.030197    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:48:58.030206    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:48:58.066716    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:48:58.066726    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:48:58.083392    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:48:58.083404    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:48:58.125072    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:48:58.125083    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:48:58.140978    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:48:58.140990    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
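
The gathering steps themselves are uniform: container logs come from `docker logs --tail 400 <id>`, host services from journalctl, and the kernel ring buffer from dmesg, each capped at 400 lines, presumably so one wedged component cannot flood the report. A sketch of the per-container step under that reading (container IDs copied from the cycle above; a fresh run would print new ones):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailLogs mirrors the `docker logs --tail 400 <id>` runs above.
    // docker logs interleaves stdout and stderr, so capture both.
    func tailLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	for _, id := range []string{"de4a32256d20", "65c77278924b",
    		"b75685755549", "c1d50cfb639e"} {
    		logs, err := tailLogs(id)
    		if err != nil {
    			fmt.Printf("%s: %v\n", id, err)
    			continue
    		}
    		fmt.Printf("=== %s: %d bytes ===\n", id, len(logs))
    	}
    }
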
	I0915 11:49:00.668196    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:05.669034    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:05.669167    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:05.683222    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:05.683323    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:05.694748    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:05.694849    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:05.705932    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:05.706012    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:05.716711    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:05.716798    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:05.727909    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:05.727992    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:05.739389    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:05.739461    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:05.749435    5437 logs.go:276] 0 containers: []
	W0915 11:49:05.749449    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:05.749507    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:05.759770    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:05.759788    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:05.759793    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:05.771443    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:05.771453    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:05.783172    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:05.783181    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:05.797999    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:05.798009    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:05.809741    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:05.809751    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:05.823346    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:05.823360    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:05.848075    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:05.848093    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:05.888381    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:05.888394    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:05.893316    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:05.893326    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:05.905986    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:05.906002    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:05.921531    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:05.921539    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:05.937109    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:05.937121    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:05.976295    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:05.976311    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:05.993160    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:05.993176    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:06.019042    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:06.019056    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:06.059865    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:06.059876    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:08.576578    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:13.578848    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:13.579055    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:13.594384    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:13.594479    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:13.606619    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:13.606711    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:13.617229    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:13.617305    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:13.627883    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:13.627962    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:13.638607    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:13.638684    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:13.649005    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:13.649077    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:13.662953    5437 logs.go:276] 0 containers: []
	W0915 11:49:13.662965    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:13.663036    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:13.673526    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:13.673541    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:13.673546    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:13.688999    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:13.689009    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:13.706305    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:13.706320    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:13.751211    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:13.751221    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:13.764560    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:13.764571    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:13.769409    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:13.769419    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:13.781547    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:13.781559    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:13.794585    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:13.794599    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:13.814771    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:13.814788    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:13.829753    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:13.829767    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:13.842322    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:13.842337    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:13.860694    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:13.860707    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:13.902715    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:13.902731    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:13.942975    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:13.942985    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:13.962364    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:13.962375    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:13.988539    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:13.988552    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:16.507149    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:21.509397    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:21.509718    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:21.537122    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:21.537263    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:21.554070    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:21.554172    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:21.566902    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:21.566991    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:21.580134    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:21.580225    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:21.594887    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:21.594971    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:21.612294    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:21.612350    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:21.623549    5437 logs.go:276] 0 containers: []
	W0915 11:49:21.623557    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:21.623603    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:21.634950    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:21.634963    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:21.634969    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:21.651747    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:21.651763    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:21.656830    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:21.656837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:21.672405    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:21.672415    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:21.686759    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:21.686769    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:21.699945    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:21.699956    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:21.739749    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:21.739762    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:21.779327    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:21.779348    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:21.791259    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:21.791271    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:21.804051    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:21.804065    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:21.819549    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:21.819561    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:21.844359    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:21.844371    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:21.884510    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:21.884526    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:21.897097    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:21.897108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:21.916412    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:21.916429    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:21.928863    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:21.928874    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:24.444715    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:29.446825    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:29.446949    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:29.461727    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:29.461819    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:29.474625    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:29.474707    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:29.491143    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:29.491226    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:29.507816    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:29.507889    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:29.526355    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:29.526439    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:29.537743    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:29.537837    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:29.548390    5437 logs.go:276] 0 containers: []
	W0915 11:49:29.548403    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:29.548475    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:29.559158    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:29.559176    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:29.559181    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:29.598843    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:29.598860    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:29.611502    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:29.611520    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:29.623756    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:29.623768    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:29.642466    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:29.642478    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:29.657382    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:29.657400    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:29.672939    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:29.672950    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:29.677729    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:29.677739    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:29.693080    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:29.693092    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:29.708672    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:29.708687    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:29.722104    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:29.722118    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:29.735450    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:29.735462    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:29.760552    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:29.760565    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:29.773260    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:29.773271    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:29.809209    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:29.809220    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:29.846748    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:29.846761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:32.362496    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:37.364610    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:37.364721    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:37.376906    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:37.377017    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:37.388523    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:37.388614    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:37.400229    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:37.400274    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:37.411350    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:37.411432    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:37.422982    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:37.423067    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:37.434706    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:37.434787    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:37.446049    5437 logs.go:276] 0 containers: []
	W0915 11:49:37.446063    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:37.446143    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:37.458195    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:37.458215    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:37.458221    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:37.472828    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:37.472844    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:37.485919    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:37.485932    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:37.499421    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:37.499434    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:37.539136    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:37.539149    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:37.556167    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:37.556175    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:37.569946    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:37.569959    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:37.583734    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:37.583746    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:37.596948    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:37.596962    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:37.643313    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:37.643326    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:37.664623    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:37.664637    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:37.670460    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:37.670472    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:37.686230    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:37.686244    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:37.702014    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:37.702025    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:37.717387    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:37.717401    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:37.743796    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:37.743812    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
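
One gathering step is worth calling out: the "container status" command, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when it is installed and silently falls back to plain docker when it is missing or fails. The same two-stage fallback as a Go sketch (the function name is mine):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl when present on PATH, falling back
    // to `docker ps -a` otherwise, like the shell one-liner in the log.
    func containerStatus() (string, error) {
    	if _, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(out)
    }
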
	I0915 11:49:40.284297    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:45.289489    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:45.289570    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:45.300840    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:45.300925    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:45.312949    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:45.313039    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:45.324653    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:45.324748    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:45.336735    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:45.336819    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:45.350335    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:45.350425    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:45.361512    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:45.361604    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:45.374441    5437 logs.go:276] 0 containers: []
	W0915 11:49:45.374453    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:45.374532    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:45.389638    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:45.389656    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:45.389663    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:45.401970    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:45.401983    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:45.416599    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:45.416613    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:45.429751    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:45.429766    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:45.442936    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:45.442948    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:45.483595    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:45.483611    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:45.488552    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:45.488564    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:45.504525    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:45.504535    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:45.546025    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:45.546039    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:45.561534    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:45.561543    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:45.578128    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:45.578144    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:45.595710    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:45.595727    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:45.616822    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:45.616831    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:45.651471    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:45.651481    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:45.671566    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:45.671576    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:45.696387    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:45.696397    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:48.213191    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:49:53.219273    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:49:53.219356    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:49:53.231932    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:49:53.232014    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:49:53.244361    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:49:53.244447    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:49:53.256651    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:49:53.256734    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:49:53.268547    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:49:53.268626    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:49:53.280545    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:49:53.280631    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:49:53.294588    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:49:53.294673    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:49:53.305965    5437 logs.go:276] 0 containers: []
	W0915 11:49:53.305975    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:49:53.306046    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:49:53.317248    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:49:53.317266    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:49:53.317271    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:49:53.361565    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:49:53.361587    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:49:53.374440    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:49:53.374457    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:49:53.388954    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:49:53.388964    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:49:53.393286    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:49:53.393295    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:49:53.432683    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:49:53.432699    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:49:53.447921    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:49:53.447932    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:49:53.472159    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:49:53.472173    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:49:53.491765    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:49:53.491779    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:49:53.515367    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:49:53.515376    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:49:53.530374    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:49:53.530386    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:49:53.546883    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:49:53.546894    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:49:53.559178    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:49:53.559194    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:49:53.573772    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:49:53.573785    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:49:53.585595    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:49:53.585605    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:49:53.599657    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:49:53.599672    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:49:56.140865    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:01.145380    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:01.145499    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:01.157038    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:01.157128    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:01.168275    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:01.168366    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:01.180420    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:01.180509    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:01.191361    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:01.191454    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:01.203740    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:01.203823    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:01.215706    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:01.215797    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:01.227662    5437 logs.go:276] 0 containers: []
	W0915 11:50:01.227674    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:01.227749    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:01.239148    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:01.239172    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:01.239177    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:01.275443    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:01.275451    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:01.290832    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:01.290841    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:01.303604    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:01.303613    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:01.318521    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:01.318532    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:01.331065    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:01.331074    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:01.371086    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:01.371107    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:01.375957    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:01.375971    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:01.417302    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:01.417317    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:01.437566    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:01.437578    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:01.449294    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:01.449305    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:01.464059    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:01.464068    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:01.476248    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:01.476260    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:01.493501    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:01.493511    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:01.515798    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:01.515804    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:01.530525    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:01.530536    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:04.049652    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:09.053236    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:09.053336    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:09.065235    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:09.065324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:09.076627    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:09.076711    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:09.088216    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:09.088299    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:09.099698    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:09.099784    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:09.111101    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:09.111187    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:09.124400    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:09.124524    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:09.135440    5437 logs.go:276] 0 containers: []
	W0915 11:50:09.135453    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:09.135525    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:09.146964    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:09.146981    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:09.146987    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:09.160921    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:09.160929    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:09.185688    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:09.185705    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:09.190711    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:09.190724    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:09.227839    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:09.227854    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:09.248295    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:09.248320    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:09.288113    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:09.288128    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:09.303134    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:09.303145    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:09.314826    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:09.314837    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:09.329482    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:09.329496    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:09.341416    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:09.341426    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:09.355220    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:09.355229    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:09.367309    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:09.367320    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:09.384749    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:09.384760    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:09.397452    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:09.397461    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:09.411499    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:09.411509    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:11.951167    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:16.953453    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:16.953569    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:16.964870    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:16.964967    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:16.977857    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:16.977942    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:16.989043    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:16.989126    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:17.000935    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:17.001025    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:17.012259    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:17.012373    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:17.023276    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:17.023357    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:17.034878    5437 logs.go:276] 0 containers: []
	W0915 11:50:17.034889    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:17.034967    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:17.045928    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:17.045946    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:17.045952    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:17.069082    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:17.069092    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:17.089788    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:17.089805    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:17.107051    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:17.107064    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:17.126000    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:17.126010    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:17.138779    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:17.138791    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:17.151466    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:17.151478    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:17.191134    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:17.191145    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:17.203363    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:17.203372    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:17.241499    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:17.241511    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:17.256453    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:17.256464    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:17.267685    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:17.267695    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:17.281565    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:17.281576    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:17.316108    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:17.316120    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:17.330211    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:17.330222    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:17.334731    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:17.334738    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:19.851618    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:24.854294    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:24.854395    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:24.869940    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:24.870028    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:24.880865    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:24.880954    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:24.892847    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:24.892935    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:24.904002    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:24.904084    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:24.915896    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:24.915985    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:24.927447    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:24.927535    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:24.941962    5437 logs.go:276] 0 containers: []
	W0915 11:50:24.941973    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:24.942050    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:24.953217    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:24.953236    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:24.953242    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:24.995259    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:24.995281    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:25.009936    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:25.009945    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:25.021924    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:25.021934    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:25.039591    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:25.039601    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:25.051259    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:25.051269    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:25.090390    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:25.090399    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:25.094587    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:25.094595    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:25.112781    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:25.112792    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:25.137305    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:25.137314    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:25.175674    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:25.175690    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:25.192516    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:25.192527    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:25.207955    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:25.207965    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:25.219610    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:25.219620    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:25.233806    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:25.233819    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:25.248276    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:25.248285    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:27.768294    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:32.770800    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:32.770897    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:32.785551    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:32.785634    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:32.797443    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:32.797530    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:32.809322    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:32.809412    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:32.820979    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:32.821062    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:32.832662    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:32.832742    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:32.844414    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:32.844503    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:32.855720    5437 logs.go:276] 0 containers: []
	W0915 11:50:32.855732    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:32.855804    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:32.875753    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:32.875772    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:32.875778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:32.916568    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:32.916582    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:32.931213    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:32.931223    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:32.943294    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:32.943303    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:32.954831    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:32.954842    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:32.968963    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:32.968976    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:32.992506    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:32.992514    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:33.031355    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:33.031366    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:33.046314    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:33.046328    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:33.060098    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:33.060109    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:33.071483    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:33.071497    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:33.089584    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:33.089597    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:33.103662    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:33.103671    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:33.107986    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:33.107991    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:33.146583    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:33.146594    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:33.158528    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:33.158537    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:35.670671    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:40.671756    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:40.671873    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:40.683350    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:40.683441    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:40.695670    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:40.695760    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:40.711362    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:40.711451    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:40.723080    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:40.723176    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:40.734503    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:40.734600    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:40.749507    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:40.749600    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:40.760909    5437 logs.go:276] 0 containers: []
	W0915 11:50:40.760922    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:40.760999    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:40.774095    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:40.774114    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:40.774119    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:40.813695    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:40.813706    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:40.827876    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:40.827885    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:40.841589    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:40.841599    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:40.862330    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:40.862340    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:40.866473    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:40.866479    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:40.880952    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:40.880964    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:40.906124    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:40.906140    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:40.929241    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:40.929261    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:40.965732    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:40.965746    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:40.977155    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:40.977166    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:41.014818    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:41.014830    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:41.026186    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:41.026196    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:41.038378    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:41.038389    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:41.054291    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:41.054305    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:41.068919    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:41.068930    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:43.583206    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:48.585546    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:48.585631    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:50:48.597250    5437 logs.go:276] 2 containers: [de4a32256d20 65c77278924b]
	I0915 11:50:48.597337    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:50:48.608380    5437 logs.go:276] 2 containers: [b75685755549 c1d50cfb639e]
	I0915 11:50:48.608472    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:50:48.622196    5437 logs.go:276] 1 containers: [ec0eabd08131]
	I0915 11:50:48.622283    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:50:48.636650    5437 logs.go:276] 2 containers: [527b2ea24373 3c2c62219606]
	I0915 11:50:48.636746    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:50:48.648097    5437 logs.go:276] 1 containers: [8816c52e8944]
	I0915 11:50:48.648185    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:50:48.659884    5437 logs.go:276] 2 containers: [ac36e26f2643 66a874cf4b12]
	I0915 11:50:48.659970    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:50:48.670676    5437 logs.go:276] 0 containers: []
	W0915 11:50:48.670691    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:50:48.670769    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:50:48.683549    5437 logs.go:276] 1 containers: [5934f0ed6866]
	I0915 11:50:48.683566    5437 logs.go:123] Gathering logs for kube-proxy [8816c52e8944] ...
	I0915 11:50:48.683573    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8816c52e8944"
	I0915 11:50:48.695964    5437 logs.go:123] Gathering logs for kube-controller-manager [ac36e26f2643] ...
	I0915 11:50:48.695979    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac36e26f2643"
	I0915 11:50:48.713359    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:50:48.713368    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:50:48.752937    5437 logs.go:123] Gathering logs for kube-apiserver [de4a32256d20] ...
	I0915 11:50:48.752946    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de4a32256d20"
	I0915 11:50:48.767947    5437 logs.go:123] Gathering logs for coredns [ec0eabd08131] ...
	I0915 11:50:48.767962    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec0eabd08131"
	I0915 11:50:48.779587    5437 logs.go:123] Gathering logs for storage-provisioner [5934f0ed6866] ...
	I0915 11:50:48.779601    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5934f0ed6866"
	I0915 11:50:48.791055    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:50:48.791066    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:50:48.802340    5437 logs.go:123] Gathering logs for etcd [c1d50cfb639e] ...
	I0915 11:50:48.802348    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1d50cfb639e"
	I0915 11:50:48.818332    5437 logs.go:123] Gathering logs for kube-scheduler [527b2ea24373] ...
	I0915 11:50:48.818347    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 527b2ea24373"
	I0915 11:50:48.834747    5437 logs.go:123] Gathering logs for kube-scheduler [3c2c62219606] ...
	I0915 11:50:48.834757    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c2c62219606"
	I0915 11:50:48.846397    5437 logs.go:123] Gathering logs for kube-controller-manager [66a874cf4b12] ...
	I0915 11:50:48.846409    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66a874cf4b12"
	I0915 11:50:48.860716    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:50:48.860726    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:50:48.865346    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:50:48.865353    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:50:48.900768    5437 logs.go:123] Gathering logs for kube-apiserver [65c77278924b] ...
	I0915 11:50:48.900778    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 65c77278924b"
	I0915 11:50:48.937984    5437 logs.go:123] Gathering logs for etcd [b75685755549] ...
	I0915 11:50:48.937994    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b75685755549"
	I0915 11:50:48.952066    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:50:48.952079    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:50:51.476724    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:50:56.479097    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:50:56.479130    5437 kubeadm.go:597] duration metric: took 4m3.747669416s to restartPrimaryControlPlane
	W0915 11:50:56.479165    5437 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0915 11:50:56.479177    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0915 11:50:57.453461    5437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 11:50:57.458675    5437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 11:50:57.461716    5437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 11:50:57.464539    5437 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 11:50:57.464545    5437 kubeadm.go:157] found existing configuration files:
	
	I0915 11:50:57.464576    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf
	I0915 11:50:57.467083    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 11:50:57.467117    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 11:50:57.469743    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf
	I0915 11:50:57.472639    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 11:50:57.472665    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 11:50:57.475395    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf
	I0915 11:50:57.477853    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 11:50:57.477877    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 11:50:57.480756    5437 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf
	I0915 11:50:57.483306    5437 kubeadm.go:163] "https://control-plane.minikube.internal:50549" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50549 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 11:50:57.483329    5437 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 11:50:57.485964    5437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0915 11:50:57.502833    5437 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0915 11:50:57.502919    5437 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 11:50:57.550307    5437 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 11:50:57.550438    5437 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 11:50:57.550491    5437 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0915 11:50:57.609805    5437 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 11:50:57.614599    5437 out.go:235]   - Generating certificates and keys ...
	I0915 11:50:57.614636    5437 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 11:50:57.614664    5437 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 11:50:57.614702    5437 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0915 11:50:57.614732    5437 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0915 11:50:57.614772    5437 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0915 11:50:57.614798    5437 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0915 11:50:57.614831    5437 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0915 11:50:57.614963    5437 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0915 11:50:57.615037    5437 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0915 11:50:57.615108    5437 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0915 11:50:57.615128    5437 kubeadm.go:310] [certs] Using the existing "sa" key
	I0915 11:50:57.615160    5437 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 11:50:57.746207    5437 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 11:50:57.950659    5437 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 11:50:58.196950    5437 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 11:50:58.408678    5437 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 11:50:58.438519    5437 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 11:50:58.438903    5437 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 11:50:58.438960    5437 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 11:50:58.516244    5437 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 11:50:58.520419    5437 out.go:235]   - Booting up control plane ...
	I0915 11:50:58.520461    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 11:50:58.520500    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 11:50:58.520541    5437 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 11:50:58.520584    5437 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 11:50:58.520684    5437 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0915 11:51:03.017216    5437 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501306 seconds
	I0915 11:51:03.017313    5437 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 11:51:03.021424    5437 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 11:51:03.544249    5437 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 11:51:03.544571    5437 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-515000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 11:51:04.048256    5437 kubeadm.go:310] [bootstrap-token] Using token: 19ou2y.372pn0rn1zo0hpgd
	I0915 11:51:04.054170    5437 out.go:235]   - Configuring RBAC rules ...
	I0915 11:51:04.054235    5437 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 11:51:04.054289    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 11:51:04.062746    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 11:51:04.064068    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 11:51:04.064833    5437 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 11:51:04.066181    5437 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 11:51:04.070265    5437 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 11:51:04.250145    5437 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 11:51:04.452080    5437 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 11:51:04.452652    5437 kubeadm.go:310] 
	I0915 11:51:04.452684    5437 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 11:51:04.452689    5437 kubeadm.go:310] 
	I0915 11:51:04.452774    5437 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 11:51:04.452780    5437 kubeadm.go:310] 
	I0915 11:51:04.452792    5437 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 11:51:04.452837    5437 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 11:51:04.452867    5437 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 11:51:04.452870    5437 kubeadm.go:310] 
	I0915 11:51:04.452900    5437 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 11:51:04.452904    5437 kubeadm.go:310] 
	I0915 11:51:04.452935    5437 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 11:51:04.452939    5437 kubeadm.go:310] 
	I0915 11:51:04.452997    5437 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 11:51:04.453032    5437 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 11:51:04.453088    5437 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 11:51:04.453132    5437 kubeadm.go:310] 
	I0915 11:51:04.453207    5437 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 11:51:04.453291    5437 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 11:51:04.453295    5437 kubeadm.go:310] 
	I0915 11:51:04.453340    5437 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 19ou2y.372pn0rn1zo0hpgd \
	I0915 11:51:04.453455    5437 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd \
	I0915 11:51:04.453473    5437 kubeadm.go:310] 	--control-plane 
	I0915 11:51:04.453478    5437 kubeadm.go:310] 
	I0915 11:51:04.453596    5437 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 11:51:04.453601    5437 kubeadm.go:310] 
	I0915 11:51:04.453648    5437 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 19ou2y.372pn0rn1zo0hpgd \
	I0915 11:51:04.453815    5437 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:976f35c11eaace633187d11e180e90834474249d2876b2faadddb8c25ff439dd 
	I0915 11:51:04.453907    5437 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 11:51:04.453913    5437 cni.go:84] Creating CNI manager for ""
	I0915 11:51:04.453922    5437 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:51:04.457392    5437 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 11:51:04.465400    5437 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 11:51:04.468527    5437 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0915 11:51:04.473483    5437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 11:51:04.473559    5437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 11:51:04.473651    5437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-515000 minikube.k8s.io/updated_at=2024_09_15T11_51_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=6b3e75bb13951e1aa9da4105a14c95c8da7f2673 minikube.k8s.io/name=stopped-upgrade-515000 minikube.k8s.io/primary=true
	I0915 11:51:04.478719    5437 ops.go:34] apiserver oom_adj: -16
	I0915 11:51:04.510378    5437 kubeadm.go:1113] duration metric: took 36.859333ms to wait for elevateKubeSystemPrivileges
	I0915 11:51:04.515356    5437 kubeadm.go:394] duration metric: took 4m11.797980792s to StartCluster
	I0915 11:51:04.515372    5437 settings.go:142] acquiring lock: {Name:mke41fab1fd2ef0229fde23400affd11462eeb5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:51:04.515462    5437 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:51:04.515916    5437 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/kubeconfig: {Name:mk9e0a30ddabe493b890dd5df7bd6be2bae61f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:51:04.516145    5437 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:51:04.516155    5437 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0915 11:51:04.516224    5437 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-515000"
	I0915 11:51:04.516226    5437 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:51:04.516233    5437 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-515000"
	W0915 11:51:04.516236    5437 addons.go:243] addon storage-provisioner should already be in state true
	I0915 11:51:04.516250    5437 host.go:66] Checking if "stopped-upgrade-515000" exists ...
	I0915 11:51:04.516257    5437 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-515000"
	I0915 11:51:04.516266    5437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-515000"
	I0915 11:51:04.517157    5437 kapi.go:59] client config for stopped-upgrade-515000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/stopped-upgrade-515000/client.key", CAFile:"/Users/jenkins/minikube-integration/19648-1650/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104435800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 11:51:04.517273    5437 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-515000"
	W0915 11:51:04.517278    5437 addons.go:243] addon default-storageclass should already be in state true
	I0915 11:51:04.517284    5437 host.go:66] Checking if "stopped-upgrade-515000" exists ...
	I0915 11:51:04.520405    5437 out.go:177] * Verifying Kubernetes components...
	I0915 11:51:04.520747    5437 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 11:51:04.523446    5437 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 11:51:04.523454    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:51:04.527324    5437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 11:51:04.531357    5437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 11:51:04.535357    5437 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:51:04.535364    5437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 11:51:04.535370    5437 sshutil.go:53] new ssh client: &{IP:localhost Port:50515 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/stopped-upgrade-515000/id_rsa Username:docker}
	I0915 11:51:04.601110    5437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 11:51:04.606380    5437 api_server.go:52] waiting for apiserver process to appear ...
	I0915 11:51:04.606432    5437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 11:51:04.610882    5437 api_server.go:72] duration metric: took 94.726208ms to wait for apiserver process to appear ...
	I0915 11:51:04.610891    5437 api_server.go:88] waiting for apiserver healthz status ...
	I0915 11:51:04.610897    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:04.630981    5437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 11:51:04.673366    5437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 11:51:04.994538    5437 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 11:51:04.994550    5437 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 11:51:09.613064    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:09.613136    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:14.613651    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:14.613704    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:19.614072    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:19.614115    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:24.614685    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:24.614709    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:29.615382    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:29.615434    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:34.616327    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:34.616366    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0915 11:51:34.996787    5437 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0915 11:51:35.000031    5437 out.go:177] * Enabled addons: storage-provisioner
	I0915 11:51:35.011949    5437 addons.go:510] duration metric: took 30.495936834s for enable addons: enabled=[storage-provisioner]
	I0915 11:51:39.617506    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:39.617562    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:44.619116    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:44.619140    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:49.620762    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:49.620799    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:54.621161    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:54.621189    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:51:59.623356    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:51:59.623377    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:04.625539    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:04.625774    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:04.653539    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:04.653653    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:04.673070    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:04.673154    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:04.684817    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:04.684901    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:04.695685    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:04.695764    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:04.709406    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:04.709479    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:04.719423    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:04.719486    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:04.729660    5437 logs.go:276] 0 containers: []
	W0915 11:52:04.729672    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:04.729743    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:04.740035    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:04.740053    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:04.740061    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:04.755097    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:04.755108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:04.782316    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:04.782328    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:04.820052    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:04.820063    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:04.834163    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:04.834174    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:04.848129    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:04.848140    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:04.859894    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:04.859905    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:04.871776    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:04.871790    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:04.883493    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:04.883505    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:04.895394    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:04.895410    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:04.920248    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:04.920255    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:04.924316    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:04.924322    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:04.958320    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:04.958333    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:07.472550    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:12.475325    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:12.475870    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:12.508869    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:12.509015    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:12.528351    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:12.528470    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:12.542527    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:12.542616    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:12.554427    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:12.554503    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:12.565176    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:12.565261    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:12.576163    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:12.576233    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:12.586654    5437 logs.go:276] 0 containers: []
	W0915 11:52:12.586664    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:12.586729    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:12.596855    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:12.596869    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:12.596875    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:12.608720    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:12.608734    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:12.623303    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:12.623313    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:12.635261    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:12.635274    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:12.647159    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:12.647173    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:12.661186    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:12.661198    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:12.676370    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:12.676380    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:12.710443    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:12.710452    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:12.729594    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:12.729605    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:12.749944    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:12.749953    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:12.774779    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:12.774790    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:12.788235    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:12.788250    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:12.826296    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:12.826309    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:15.332778    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:20.334531    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:20.334805    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:20.366251    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:20.366370    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:20.382879    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:20.382976    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:20.395236    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:20.395324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:20.408416    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:20.408497    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:20.418805    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:20.418886    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:20.434060    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:20.434143    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:20.445079    5437 logs.go:276] 0 containers: []
	W0915 11:52:20.445091    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:20.445158    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:20.455458    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:20.455474    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:20.455479    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:20.466901    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:20.466914    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:20.482313    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:20.482326    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:20.501247    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:20.501260    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:20.516494    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:20.516504    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:20.550350    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:20.550362    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:20.564836    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:20.564847    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:20.578145    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:20.578155    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:20.595129    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:20.595141    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:20.613423    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:20.613432    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:20.637606    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:20.637613    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:20.675231    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:20.675236    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:20.679810    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:20.679819    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:23.193359    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:28.196166    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:28.196747    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:28.236915    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:28.237074    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:28.258054    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:28.258173    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:28.272999    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:28.273084    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:28.289244    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:28.289321    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:28.300168    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:28.300249    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:28.313063    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:28.313135    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:28.323259    5437 logs.go:276] 0 containers: []
	W0915 11:52:28.323271    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:28.323346    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:28.336170    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:28.336188    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:28.336193    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:28.348292    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:28.348301    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:28.372230    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:28.372245    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:28.410683    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:28.410694    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:28.424566    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:28.424581    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:28.438144    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:28.438154    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:28.449665    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:28.449677    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:28.460976    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:28.460988    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:28.477430    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:28.477439    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:28.489022    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:28.489032    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:28.499991    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:28.500005    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:28.504332    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:28.504339    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:28.540071    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:28.540084    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
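After each failed probe, minikube re-resolves the container ID of every control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the bracketed IDs on the logs.go:276 lines are the parsed result, and the kindnet lookup consistently returns none. A sketch of that lookup, shelling out to docker locally rather than through minikube's ssh_runner (an assumption made so the example runs standalone):

    // enumerate.go: sketch of the per-component container-ID lookup above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One ID per line, e.g. [f090268f49ba]; Fields also trims the
    	// trailing newline, so an empty result yields a zero-length slice.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "lookup failed:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:276 lines
    	}
    }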
	I0915 11:52:31.066224    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:36.068739    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:36.069324    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:36.108418    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:36.108576    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:36.129394    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:36.129491    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:36.144660    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:36.144756    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:36.156701    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:36.156788    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:36.167207    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:36.167280    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:36.177828    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:36.177911    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:36.187633    5437 logs.go:276] 0 containers: []
	W0915 11:52:36.187644    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:36.187705    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:36.197915    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:36.197935    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:36.197941    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:36.212349    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:36.212359    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:36.223588    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:36.223597    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:36.247224    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:36.247232    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:36.258370    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:36.258381    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:36.276186    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:36.276196    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:36.315721    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:36.315734    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:36.320279    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:36.320287    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:36.357084    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:36.357095    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:36.371127    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:36.371140    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:36.383775    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:36.383786    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:36.395392    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:36.395401    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:36.410362    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:36.410373    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:38.923593    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:43.925046    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:43.925531    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:43.964606    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:43.964762    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:43.986149    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:43.986288    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:44.000942    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:44.001031    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:44.013286    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:44.013371    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:44.023911    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:44.023994    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:44.034050    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:44.034127    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:44.043947    5437 logs.go:276] 0 containers: []
	W0915 11:52:44.043958    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:44.044030    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:44.054537    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:44.054554    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:44.054560    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:44.068914    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:44.068926    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:44.080862    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:44.080882    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:44.092393    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:44.092405    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:44.108475    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:44.108485    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:44.124236    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:44.124245    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:44.135532    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:44.135541    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:44.159220    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:44.159228    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:44.196264    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:44.196275    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:44.200530    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:44.200537    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:44.235626    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:44.235638    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:44.249371    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:44.249384    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:44.267016    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:44.267025    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
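Each "Gathering logs for ..." line maps to exactly one shell command, and every source is capped at 400 lines, with per-container output read via docker logs --tail 400 <id>. The commands below are copied verbatim from the log; wrapping them in a map and invoking each through /bin/bash -c is illustrative structure, not minikube's own:

    // gather.go: the command set behind the "Gathering logs for ..." lines.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	sources := map[string]string{
    		"kubelet":        "sudo journalctl -u kubelet -n 400",
    		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
    		// Per-container form; the ID here is the kube-apiserver
    		// container from this log, one of several gathered each pass.
    		"kube-apiserver": "docker logs --tail 400 f090268f49ba",
    	}
    	for name, cmd := range sources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
    	}
    }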
	I0915 11:52:46.781723    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:51.784589    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:51.785175    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:51.826702    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:51.826855    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:51.847402    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:51.847530    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:51.862929    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:51.863012    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:51.875563    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:51.875648    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:51.886804    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:51.886876    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:51.897237    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:51.897318    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:51.907229    5437 logs.go:276] 0 containers: []
	W0915 11:52:51.907240    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:51.907311    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:51.917936    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:51.917952    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:51.917957    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:51.922101    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:51.922108    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:51.935992    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:51.936002    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:51.947611    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:51.947629    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:51.965482    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:51.965492    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:51.976959    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:51.976972    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:52.000102    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:52.000111    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:52.011667    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:52.011677    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:52.048433    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:52.048443    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:52.081893    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:52.081907    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:52:52.095825    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:52.095836    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:52.113636    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:52.113651    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:52.125141    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:52.125155    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:54.641907    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:52:59.644625    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:52:59.645094    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:52:59.679726    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:52:59.679877    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:52:59.698097    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:52:59.698205    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:52:59.711787    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:52:59.711879    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:52:59.728239    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:52:59.728317    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:52:59.738710    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:52:59.738796    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:52:59.749179    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:52:59.749260    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:52:59.759706    5437 logs.go:276] 0 containers: []
	W0915 11:52:59.759720    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:52:59.759788    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:52:59.770092    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:52:59.770107    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:52:59.770111    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:52:59.787004    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:52:59.787014    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:52:59.798874    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:52:59.798884    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:52:59.810495    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:52:59.810504    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:52:59.826049    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:52:59.826059    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:52:59.865315    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:52:59.865323    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:52:59.869459    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:52:59.869467    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:52:59.880848    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:52:59.880859    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:52:59.898825    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:52:59.898835    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:52:59.913662    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:52:59.913671    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:52:59.937085    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:52:59.937091    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:52:59.950203    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:52:59.950215    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:52:59.988002    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:52:59.988017    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:02.507471    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:07.509727    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:07.510034    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:07.534338    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:07.534473    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:07.550484    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:07.550578    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:07.563456    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:53:07.563538    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:07.574655    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:07.574734    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:07.585116    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:07.585197    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:07.595477    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:07.595554    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:07.608586    5437 logs.go:276] 0 containers: []
	W0915 11:53:07.608597    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:07.608663    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:07.618949    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:07.618965    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:07.618969    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:07.655701    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:07.655708    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:07.692193    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:07.692202    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:07.704158    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:07.704169    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:07.718776    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:07.718786    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:07.743414    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:07.743420    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:07.747996    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:07.748003    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:07.762337    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:07.762349    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:07.775799    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:07.775812    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:07.787448    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:07.787457    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:07.801995    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:07.802004    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:07.822826    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:07.822838    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:07.834127    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:07.834142    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:10.350150    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:15.351998    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:15.352514    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:15.385979    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:15.386135    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:15.407615    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:15.407723    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:15.422181    5437 logs.go:276] 2 containers: [e546ba6a48d0 f05f24c58255]
	I0915 11:53:15.422254    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:15.438574    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:15.438655    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:15.449092    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:15.449176    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:15.458847    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:15.458928    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:15.468398    5437 logs.go:276] 0 containers: []
	W0915 11:53:15.468412    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:15.468479    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:15.487683    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:15.487702    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:15.487707    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:15.515526    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:15.515541    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:15.538292    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:15.538309    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:15.576253    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:15.576268    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:15.592621    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:15.592632    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:15.629200    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:15.629209    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:15.633353    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:15.633360    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:15.680928    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:15.680943    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:15.713833    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:15.713852    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:15.748025    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:15.748046    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:15.777592    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:15.777606    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:15.789362    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:15.789374    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:15.804982    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:15.804992    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:18.318823    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:23.321124    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:23.321526    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:23.356394    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:23.356536    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:23.374803    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:23.374903    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:23.388140    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:53:23.388229    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:23.402700    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:23.402777    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:23.413291    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:23.413368    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:23.423756    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:23.423834    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:23.433723    5437 logs.go:276] 0 containers: []
	W0915 11:53:23.433733    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:23.433795    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:23.444075    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:23.444095    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:23.444101    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:23.458806    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:23.458817    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:23.483555    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:23.483565    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:23.495221    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:23.495234    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:23.499973    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:23.499982    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:23.535586    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:23.535598    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:23.556115    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:53:23.556127    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:53:23.567651    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:23.567842    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:23.585063    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:23.585077    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:23.599010    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:23.599027    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:23.610993    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:23.611003    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:23.622448    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:23.622458    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:23.633691    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:23.633700    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:23.670325    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:53:23.670334    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:53:23.681547    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:23.681558    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
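From 11:53:23 onward the coredns filter returns four containers, adding 9c39485ab9c7 and bc13e5c7f3c9 alongside the original pair, which suggests new coredns containers were created while the apiserver stayed unreachable. Note also the shell fallback in the recurring "container status" command: the backquoted which crictl || echo crictl substitutes either crictl's full path or a bare name that fails, and the trailing || sudo docker ps -a then takes over. The same fallback expressed directly in Go, as a sketch:

    // status.go: the crictl-to-docker fallback from the "container status" step.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
    	if err != nil {
    		// crictl missing or failing: fall back to docker, as the
    		// log's bash one-liner does with "|| sudo docker ps -a".
    		out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
    	}
    	fmt.Printf("err=%v\n%s", err, out)
    }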
	I0915 11:53:26.195338    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:31.197920    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:31.198540    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:31.237833    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:31.238002    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:31.260018    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:31.260147    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:31.275273    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:53:31.275357    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:31.287405    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:31.287491    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:31.298336    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:31.298413    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:31.311680    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:31.311762    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:31.322133    5437 logs.go:276] 0 containers: []
	W0915 11:53:31.322144    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:31.322207    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:31.332662    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:31.332682    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:31.332687    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:31.351530    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:53:31.351539    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:53:31.363393    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:31.363402    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:31.402250    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:31.402259    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:31.414091    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:31.414100    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:31.428627    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:31.428637    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:31.442961    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:31.442972    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:31.454449    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:31.454458    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:31.465568    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:31.465581    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:31.483032    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:31.483042    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:31.494446    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:31.494457    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:31.519057    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:31.519068    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:31.531156    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:31.531166    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:31.565427    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:53:31.565438    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:53:31.585026    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:31.585039    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:34.089352    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:39.091616    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:39.092146    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:39.131624    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:39.131781    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:39.153502    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:39.153619    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:39.168527    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:53:39.168621    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:39.180954    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:39.181037    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:39.197877    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:39.197965    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:39.208295    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:39.208372    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:39.218924    5437 logs.go:276] 0 containers: []
	W0915 11:53:39.218935    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:39.219004    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:39.229345    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:39.229362    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:53:39.229367    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:53:39.240681    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:39.240690    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:39.252410    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:39.252424    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:39.291266    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:39.291278    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:39.295999    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:39.296006    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:39.332049    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:39.332059    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:39.346349    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:53:39.346360    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:53:39.358024    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:39.358033    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:39.369393    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:39.369404    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:39.381385    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:39.381397    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:39.406762    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:39.406773    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:39.421852    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:39.421865    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:39.433859    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:39.433871    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:39.451066    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:39.451075    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:39.468310    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:39.468321    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:41.986264    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:46.988930    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:46.989006    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:47.000157    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:47.000231    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:47.010943    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:47.011012    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:47.022760    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:53:47.022838    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:47.035602    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:47.035670    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:47.047353    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:47.047415    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:47.058515    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:47.058594    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:47.068609    5437 logs.go:276] 0 containers: []
	W0915 11:53:47.068621    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:47.068686    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:47.079500    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:47.079517    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:47.079521    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:47.093613    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:53:47.093622    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:53:47.105329    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:47.105338    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:47.117893    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:47.117902    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:47.135691    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:47.135700    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:47.147688    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:47.147698    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:47.163266    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:47.163280    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:47.176506    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:47.176521    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:47.217167    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:53:47.217186    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:53:47.230786    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:47.230799    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:47.248530    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:47.248544    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:47.262203    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:47.262215    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:47.288771    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:47.288787    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:47.293968    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:47.293980    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:47.312998    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:47.313011    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:49.859793    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:53:54.862157    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:53:54.862287    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:53:54.873648    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:53:54.873725    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:53:54.883886    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:53:54.883967    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:53:54.894468    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:53:54.894551    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:53:54.904722    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:53:54.904801    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:53:54.915293    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:53:54.915374    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:53:54.925289    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:53:54.925365    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:53:54.935823    5437 logs.go:276] 0 containers: []
	W0915 11:53:54.935833    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:53:54.935901    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:53:54.946351    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:53:54.946368    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:53:54.946374    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:53:54.959751    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:53:54.959761    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:53:54.971427    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:53:54.971437    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:53:54.987089    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:53:54.987102    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:53:55.025169    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:53:55.025178    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:53:55.059686    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:53:55.059697    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:53:55.074295    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:53:55.074303    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:53:55.085770    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:53:55.085782    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:53:55.097106    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:53:55.097115    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:53:55.101335    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:53:55.101344    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:53:55.112833    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:53:55.112844    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:53:55.124662    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:53:55.124671    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:53:55.142149    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:53:55.142160    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:53:55.153949    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:53:55.153960    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:53:55.169017    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:53:55.169026    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:53:57.695594    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:02.697963    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:02.698554    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:02.744714    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:02.744875    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:02.780310    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:02.780409    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:02.795790    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:02.795880    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:02.808094    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:02.808169    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:02.821100    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:02.821184    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:02.839202    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:02.839281    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:02.849509    5437 logs.go:276] 0 containers: []
	W0915 11:54:02.849521    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:02.849590    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:02.860085    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:02.860101    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:02.860106    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:02.899021    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:02.899027    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:02.911910    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:02.911920    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:02.923641    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:02.923657    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:02.934994    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:02.935005    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:02.939182    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:02.939192    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:02.952763    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:02.952774    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:02.964801    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:02.964813    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:02.999149    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:02.999162    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:03.013645    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:03.013659    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:03.024778    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:03.024788    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:03.039240    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:03.039251    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:03.051423    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:03.051433    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:03.063451    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:03.063459    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:03.080705    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:03.080712    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:05.608194    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:10.610941    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:10.611110    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:10.623881    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:10.623941    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:10.634950    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:10.635022    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:10.646279    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:10.646346    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:10.657399    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:10.657481    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:10.670633    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:10.670692    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:10.680917    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:10.680994    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:10.692012    5437 logs.go:276] 0 containers: []
	W0915 11:54:10.692025    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:10.692108    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:10.705449    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:10.705483    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:10.705495    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:10.722576    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:10.722586    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:10.735260    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:10.735271    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:10.748506    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:10.748516    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:10.775226    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:10.775237    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:10.779819    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:10.779833    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:10.818263    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:10.818277    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:10.834232    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:10.834243    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:10.847488    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:10.847501    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:10.860334    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:10.860343    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:10.876171    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:10.876183    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:10.895551    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:10.895559    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:10.933993    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:10.934006    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:10.947892    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:10.947906    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:10.961564    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:10.961574    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:13.476508    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:18.477648    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:18.477919    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:18.497726    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:18.497829    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:18.511611    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:18.511703    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:18.523115    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:18.523199    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:18.533397    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:18.533475    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:18.543877    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:18.543954    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:18.554629    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:18.554704    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:18.565044    5437 logs.go:276] 0 containers: []
	W0915 11:54:18.565054    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:18.565118    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:18.575406    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:18.575423    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:18.575428    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:18.613010    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:18.613026    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:18.652879    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:18.652891    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:18.668679    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:18.668696    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:18.673715    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:18.673731    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:18.690294    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:18.690315    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:18.703986    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:18.704002    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:18.717528    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:18.717541    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:18.730646    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:18.730659    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:18.744492    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:18.744526    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:18.769477    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:18.769485    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:18.782175    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:18.782186    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:18.807712    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:18.807720    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:18.819467    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:18.819477    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:18.834950    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:18.834965    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:21.347578    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:26.349779    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:26.350049    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:26.371066    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:26.371192    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:26.385915    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:26.386004    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:26.401844    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:26.401931    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:26.412536    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:26.412609    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:26.422968    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:26.423039    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:26.433902    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:26.433978    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:26.443804    5437 logs.go:276] 0 containers: []
	W0915 11:54:26.443815    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:26.443876    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:26.454223    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:26.454241    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:26.454247    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:26.458494    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:26.458500    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:26.491877    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:26.491888    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:26.509565    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:26.509582    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:26.533689    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:26.533700    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:26.545001    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:26.545011    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:26.582182    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:26.582191    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:26.597192    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:26.597201    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:26.616146    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:26.616156    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:26.627902    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:26.627913    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:26.641628    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:26.641638    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:26.653440    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:26.653451    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:26.665322    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:26.665334    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:26.677376    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:26.677392    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:26.688837    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:26.688847    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:29.202840    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:34.205534    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:34.205618    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:34.218032    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:34.218136    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:34.229042    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:34.229123    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:34.241061    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:34.241158    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:34.252268    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:34.252357    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:34.264273    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:34.264369    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:34.275657    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:34.275753    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:34.286975    5437 logs.go:276] 0 containers: []
	W0915 11:54:34.286987    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:34.287055    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:34.298917    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:34.298932    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:34.298937    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:34.339926    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:34.339947    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:34.357151    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:34.357163    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:34.372330    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:34.372338    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:34.385024    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:34.385037    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:34.398023    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:34.398039    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:34.402906    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:34.402922    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:34.416340    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:34.416349    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:34.428646    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:34.428660    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:34.441981    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:34.441993    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:34.480606    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:34.480617    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:34.497895    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:34.497904    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:34.517067    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:34.517083    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:34.534933    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:34.534948    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:34.560785    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:34.560803    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:37.081332    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:42.084025    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:42.084644    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:42.128192    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:42.128332    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:42.158375    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:42.158478    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:42.178736    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:42.178813    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:42.201277    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:42.201357    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:42.215622    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:42.215689    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:42.226425    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:42.226509    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:42.236572    5437 logs.go:276] 0 containers: []
	W0915 11:54:42.236583    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:42.236645    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:42.247301    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:42.247318    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:42.247323    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:42.284397    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:42.284408    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:42.302480    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:42.302490    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:42.314539    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:42.314549    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:42.339335    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:42.339346    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:42.353714    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:42.353724    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:42.365498    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:42.365511    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:42.377316    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:42.377330    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:42.388786    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:42.388797    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:42.402526    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:42.402538    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:42.414332    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:42.414345    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:42.429104    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:42.429114    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:42.433681    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:42.433686    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:42.468280    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:42.468299    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:42.479897    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:42.479909    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:44.993182    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:49.994834    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:49.995396    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:50.034159    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:50.034310    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:50.056435    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:50.056585    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:50.071842    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:50.071937    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:50.084353    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:50.084434    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:50.095266    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:50.095345    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:50.105806    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:50.105879    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:50.116477    5437 logs.go:276] 0 containers: []
	W0915 11:54:50.116489    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:50.116553    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:50.127297    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:50.127315    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:50.127320    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:50.140895    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:50.140906    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:50.155882    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:50.155892    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:50.167625    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:50.167637    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:50.184845    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:50.184853    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:50.196808    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:50.196818    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:50.210860    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:50.210868    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:50.233566    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:50.233573    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:50.244609    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:50.244619    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:50.283370    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:50.283378    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:50.287717    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:50.287722    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:50.325724    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:50.325742    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:50.342924    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:50.342937    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:50.356654    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:50.356667    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:50.371619    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:50.371631    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:54:52.890009    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:54:57.892307    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:54:57.892368    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 11:54:57.903855    5437 logs.go:276] 1 containers: [f090268f49ba]
	I0915 11:54:57.903923    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 11:54:57.915536    5437 logs.go:276] 1 containers: [57fe56665836]
	I0915 11:54:57.915616    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 11:54:57.928395    5437 logs.go:276] 4 containers: [9c39485ab9c7 bc13e5c7f3c9 e546ba6a48d0 f05f24c58255]
	I0915 11:54:57.928453    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 11:54:57.939006    5437 logs.go:276] 1 containers: [e6f46aa1231d]
	I0915 11:54:57.939088    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 11:54:57.950401    5437 logs.go:276] 1 containers: [4f648b6ab11f]
	I0915 11:54:57.950473    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 11:54:57.962391    5437 logs.go:276] 1 containers: [a92857fd7fe2]
	I0915 11:54:57.962485    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 11:54:57.973505    5437 logs.go:276] 0 containers: []
	W0915 11:54:57.973514    5437 logs.go:278] No container was found matching "kindnet"
	I0915 11:54:57.973564    5437 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0915 11:54:57.984154    5437 logs.go:276] 1 containers: [a99fa8b0bbe4]
	I0915 11:54:57.984168    5437 logs.go:123] Gathering logs for dmesg ...
	I0915 11:54:57.984173    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 11:54:57.988756    5437 logs.go:123] Gathering logs for describe nodes ...
	I0915 11:54:57.988767    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 11:54:58.028809    5437 logs.go:123] Gathering logs for kube-scheduler [e6f46aa1231d] ...
	I0915 11:54:58.028821    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6f46aa1231d"
	I0915 11:54:58.045880    5437 logs.go:123] Gathering logs for container status ...
	I0915 11:54:58.045890    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 11:54:58.058670    5437 logs.go:123] Gathering logs for kube-controller-manager [a92857fd7fe2] ...
	I0915 11:54:58.058682    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a92857fd7fe2"
	I0915 11:54:58.077516    5437 logs.go:123] Gathering logs for Docker ...
	I0915 11:54:58.077531    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 11:54:58.103013    5437 logs.go:123] Gathering logs for kubelet ...
	I0915 11:54:58.103023    5437 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 11:54:58.142767    5437 logs.go:123] Gathering logs for etcd [57fe56665836] ...
	I0915 11:54:58.142786    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57fe56665836"
	I0915 11:54:58.157514    5437 logs.go:123] Gathering logs for coredns [9c39485ab9c7] ...
	I0915 11:54:58.157525    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c39485ab9c7"
	I0915 11:54:58.170722    5437 logs.go:123] Gathering logs for kube-proxy [4f648b6ab11f] ...
	I0915 11:54:58.170735    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f648b6ab11f"
	I0915 11:54:58.183321    5437 logs.go:123] Gathering logs for kube-apiserver [f090268f49ba] ...
	I0915 11:54:58.183337    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f090268f49ba"
	I0915 11:54:58.198576    5437 logs.go:123] Gathering logs for coredns [bc13e5c7f3c9] ...
	I0915 11:54:58.198587    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc13e5c7f3c9"
	I0915 11:54:58.211354    5437 logs.go:123] Gathering logs for coredns [e546ba6a48d0] ...
	I0915 11:54:58.211367    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e546ba6a48d0"
	I0915 11:54:58.224663    5437 logs.go:123] Gathering logs for coredns [f05f24c58255] ...
	I0915 11:54:58.224676    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f05f24c58255"
	I0915 11:54:58.237588    5437 logs.go:123] Gathering logs for storage-provisioner [a99fa8b0bbe4] ...
	I0915 11:54:58.237599    5437 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a99fa8b0bbe4"
	I0915 11:55:00.752158    5437 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0915 11:55:05.754618    5437 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0915 11:55:05.760740    5437 out.go:201] 
	W0915 11:55:05.766806    5437 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0915 11:55:05.766816    5437 out.go:270] * 
	W0915 11:55:05.767328    5437 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:05.777667    5437 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-515000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.95s)
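
The repeating blocks above are minikube polling the apiserver's /healthz endpoint (api_server.go:253) and, after each probe times out (api_server.go:269), re-enumerating the control-plane containers and gathering their logs, until the overall 6m0s node deadline expires with GUEST_START. Each probe fails with "Client.Timeout exceeded while awaiting headers", i.e. a per-request client timeout rather than a refused connection: the kube-apiserver container [f090268f49ba] exists but never answers. A minimal sketch of that polling pattern, assuming an illustrative 5-second per-request timeout and skipped TLS verification (the names and durations are assumptions, not minikube's actual implementation):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes url until it returns 200 OK or the overall deadline
	// expires. Each GET is bounded by reqTimeout, which is why a hung
	// apiserver surfaces as "context deadline exceeded (Client.Timeout
	// exceeded while awaiting headers)" rather than "connection refused".
	func pollHealthz(url string, reqTimeout, overall time.Duration) error {
		client := &http.Client{
			Timeout: reqTimeout, // per-request deadline
			Transport: &http.Transport{
				// The guest apiserver's cert is not trusted by the host,
				// so the probe skips verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // back off, then probe again
		}
		return fmt.Errorf("apiserver healthz never reported healthy after %s", overall)
	}

	func main() {
		err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
		if err != nil {
			fmt.Println("X", err)
		}
	}

A 5s request timeout plus the two to three seconds of log gathering between probes matches the roughly 8-second cadence of the timestamps above.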

TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-764000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-764000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.79539275s)

-- stdout --
	* [pause-764000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-764000" primary control-plane node in "pause-764000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-764000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-764000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-764000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-764000 -n pause-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-764000 -n pause-764000: exit status 7 (57.15125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-764000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
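
Unlike the upgrade test above, this failure never reaches Kubernetes at all: VM creation aborts because the qemu2 driver cannot connect to the socket_vmnet network helper, hence the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A quick check that the helper is actually listening, sketched in Go under the assumption that socket_vmnet serves that unix socket path (a shell `nc -U /var/run/socket_vmnet` shows the same thing):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the unix socket the qemu2 driver expects socket_vmnet to serve.
	// "connection refused" here matches the GUEST_PROVISION errors above and
	// means the helper daemon is not running (or listens on another path).
	func main() {
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", path, err)
			return
		}
		conn.Close()
		fmt.Printf("socket_vmnet is accepting connections at %s\n", path)
	}

The ~9.8s Start failures that follow in this report end in the same GUEST_PROVISION / "Connection refused" pair and appear to share this root cause.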

TestNoKubernetes/serial/StartWithK8s (9.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 : exit status 80 (9.774530666s)

-- stdout --
	* [NoKubernetes-324000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-324000" primary control-plane node in "NoKubernetes-324000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-324000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-324000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000: exit status 7 (31.717209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.81s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239683083s)

-- stdout --
	* [NoKubernetes-324000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-324000
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-324000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000: exit status 7 (62.44525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 : exit status 80 (5.236797458s)

-- stdout --
	* [NoKubernetes-324000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-324000
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-324000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000: exit status 7 (48.111917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 : exit status 80 (5.264871375s)

-- stdout --
	* [NoKubernetes-324000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-324000
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-324000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-324000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-324000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-324000 -n NoKubernetes-324000: exit status 7 (42.190584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-324000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)
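
Note the two failure shapes in this serial group: the fresh create in StartWithK8s fails with "creating host: create: creating: ...", while the later subtests, which restart the existing NoKubernetes-324000 VM, fail with "driver start: ...". Both paths end at the same socket, which points to one environmental cause rather than per-test regressions. A quick way to confirm that across the whole report (assuming the report text has been saved locally, e.g. as report.txt, a hypothetical filename):

    grep -c 'Failed to connect to "/var/run/socket_vmnet": Connection refused' report.txt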

TestNetworkPlugins/group/auto/Start (9.98s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.980412s)

-- stdout --
	* [auto-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-271000" primary control-plane node in "auto-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:53:10.558246    5790 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:53:10.558364    5790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:10.558367    5790 out.go:358] Setting ErrFile to fd 2...
	I0915 11:53:10.558370    5790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:10.558505    5790 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:53:10.559608    5790 out.go:352] Setting JSON to false
	I0915 11:53:10.576085    5790 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4953,"bootTime":1726421437,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:53:10.576178    5790 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:53:10.583635    5790 out.go:177] * [auto-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:53:10.590470    5790 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:53:10.590498    5790 notify.go:220] Checking for updates...
	I0915 11:53:10.597384    5790 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:53:10.600417    5790 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:53:10.603458    5790 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:53:10.606453    5790 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:53:10.609431    5790 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:53:10.612921    5790 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:53:10.612988    5790 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:53:10.613037    5790 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:53:10.617437    5790 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:53:10.624461    5790 start.go:297] selected driver: qemu2
	I0915 11:53:10.624467    5790 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:53:10.624473    5790 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:53:10.626664    5790 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:53:10.629414    5790 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:53:10.632524    5790 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:53:10.632548    5790 cni.go:84] Creating CNI manager for ""
	I0915 11:53:10.632571    5790 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:53:10.632576    5790 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:53:10.632606    5790 start.go:340] cluster config:
	{Name:auto-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:53:10.636815    5790 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:53:10.645459    5790 out.go:177] * Starting "auto-271000" primary control-plane node in "auto-271000" cluster
	I0915 11:53:10.649445    5790 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:53:10.649461    5790 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:53:10.649473    5790 cache.go:56] Caching tarball of preloaded images
	I0915 11:53:10.649533    5790 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:53:10.649538    5790 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:53:10.649602    5790 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/auto-271000/config.json ...
	I0915 11:53:10.649613    5790 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/auto-271000/config.json: {Name:mk92daf20918d79e4814f2a4ff02b0c45fa5d505 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:53:10.650001    5790 start.go:360] acquireMachinesLock for auto-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:10.650040    5790 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "auto-271000"
	I0915 11:53:10.650050    5790 start.go:93] Provisioning new machine with config: &{Name:auto-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:10.650087    5790 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:10.653469    5790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:10.668735    5790 start.go:159] libmachine.API.Create for "auto-271000" (driver="qemu2")
	I0915 11:53:10.668760    5790 client.go:168] LocalClient.Create starting
	I0915 11:53:10.668821    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:10.668851    5790 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:10.668861    5790 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:10.668899    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:10.668922    5790 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:10.668932    5790 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:10.669297    5790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:10.853064    5790 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:11.097552    5790 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:11.097565    5790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:11.097777    5790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:11.107645    5790 main.go:141] libmachine: STDOUT: 
	I0915 11:53:11.107666    5790 main.go:141] libmachine: STDERR: 
	I0915 11:53:11.107717    5790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2 +20000M
	I0915 11:53:11.115940    5790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:11.115965    5790 main.go:141] libmachine: STDERR: 
	I0915 11:53:11.115987    5790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:11.115991    5790 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:11.116005    5790 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:11.116030    5790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a8:91:fc:70:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:11.117743    5790 main.go:141] libmachine: STDOUT: 
	I0915 11:53:11.117758    5790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:11.117780    5790 client.go:171] duration metric: took 449.017458ms to LocalClient.Create
	I0915 11:53:13.120001    5790 start.go:128] duration metric: took 2.469894583s to createHost
	I0915 11:53:13.120060    5790 start.go:83] releasing machines lock for "auto-271000", held for 2.470029792s
	W0915 11:53:13.120127    5790 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:13.132153    5790 out.go:177] * Deleting "auto-271000" in qemu2 ...
	W0915 11:53:13.161104    5790 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:13.161127    5790 start.go:729] Will try again in 5 seconds ...
	I0915 11:53:18.163367    5790 start.go:360] acquireMachinesLock for auto-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:18.163957    5790 start.go:364] duration metric: took 459µs to acquireMachinesLock for "auto-271000"
	I0915 11:53:18.164038    5790 start.go:93] Provisioning new machine with config: &{Name:auto-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:18.164346    5790 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:18.175049    5790 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:18.226142    5790 start.go:159] libmachine.API.Create for "auto-271000" (driver="qemu2")
	I0915 11:53:18.226194    5790 client.go:168] LocalClient.Create starting
	I0915 11:53:18.226327    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:18.226392    5790 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:18.226408    5790 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:18.226499    5790 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:18.226546    5790 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:18.226558    5790 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:18.227077    5790 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:18.394701    5790 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:18.453393    5790 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:18.453409    5790 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:18.453622    5790 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:18.462810    5790 main.go:141] libmachine: STDOUT: 
	I0915 11:53:18.462827    5790 main.go:141] libmachine: STDERR: 
	I0915 11:53:18.462881    5790 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2 +20000M
	I0915 11:53:18.470799    5790 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:18.470815    5790 main.go:141] libmachine: STDERR: 
	I0915 11:53:18.470827    5790 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:18.470833    5790 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:18.470844    5790 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:18.470880    5790 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:8b:c9:08:45:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/auto-271000/disk.qcow2
	I0915 11:53:18.472602    5790 main.go:141] libmachine: STDOUT: 
	I0915 11:53:18.472616    5790 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:18.472627    5790 client.go:171] duration metric: took 246.42925ms to LocalClient.Create
	I0915 11:53:20.474541    5790 start.go:128] duration metric: took 2.310190625s to createHost
	I0915 11:53:20.474564    5790 start.go:83] releasing machines lock for "auto-271000", held for 2.310604792s
	W0915 11:53:20.474684    5790 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:20.482960    5790 out.go:201] 
	W0915 11:53:20.491029    5790 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:53:20.491035    5790 out.go:270] * 
	* 
	W0915 11:53:20.491810    5790 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:53:20.501992    5790 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.98s)
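
The --alsologtostderr trace above pins down where the start dies: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to /var/run/socket_vmnet and only then launch qemu-system-aarch64, handing the connected descriptor to the guest NIC as fd 3 (hence -netdev socket,id=net0,fd=3 in the logged command). A hand-run reduction of that invocation, trimmed to the networking-relevant flags (illustrative only; the disk path is a placeholder, and the full command is in the log line above):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
        -device virtio-net-pci,netdev=net0 \
        -netdev socket,id=net0,fd=3 \
        disk.qcow2

Because the wrapper fails at the connect step, QEMU never starts; that is why each attempt returns in well under a second and the automatic retry five seconds later fails identically.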

TestNetworkPlugins/group/kindnet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.922413875s)

-- stdout --
	* [kindnet-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-271000" primary control-plane node in "kindnet-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:53:22.626623    5901 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:53:22.626742    5901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:22.626746    5901 out.go:358] Setting ErrFile to fd 2...
	I0915 11:53:22.626748    5901 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:22.626898    5901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:53:22.627984    5901 out.go:352] Setting JSON to false
	I0915 11:53:22.645216    5901 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4965,"bootTime":1726421437,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:53:22.645284    5901 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:53:22.650960    5901 out.go:177] * [kindnet-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:53:22.658756    5901 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:53:22.658823    5901 notify.go:220] Checking for updates...
	I0915 11:53:22.665763    5901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:53:22.668770    5901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:53:22.671753    5901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:53:22.674783    5901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:53:22.677795    5901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:53:22.681059    5901 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:53:22.681125    5901 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:53:22.681165    5901 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:53:22.684854    5901 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:53:22.691775    5901 start.go:297] selected driver: qemu2
	I0915 11:53:22.691782    5901 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:53:22.691788    5901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:53:22.694025    5901 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:53:22.696807    5901 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:53:22.699777    5901 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:53:22.699792    5901 cni.go:84] Creating CNI manager for "kindnet"
	I0915 11:53:22.699796    5901 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 11:53:22.699824    5901 start.go:340] cluster config:
	{Name:kindnet-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:53:22.703434    5901 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:53:22.710852    5901 out.go:177] * Starting "kindnet-271000" primary control-plane node in "kindnet-271000" cluster
	I0915 11:53:22.714719    5901 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:53:22.714730    5901 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:53:22.714737    5901 cache.go:56] Caching tarball of preloaded images
	I0915 11:53:22.714784    5901 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:53:22.714788    5901 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:53:22.714841    5901 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kindnet-271000/config.json ...
	I0915 11:53:22.714851    5901 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kindnet-271000/config.json: {Name:mk5a3550e7a979792d3e5607a00c3a2056487052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:53:22.715054    5901 start.go:360] acquireMachinesLock for kindnet-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:22.715084    5901 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "kindnet-271000"
	I0915 11:53:22.715094    5901 start.go:93] Provisioning new machine with config: &{Name:kindnet-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:22.715134    5901 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:22.723723    5901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:22.739682    5901 start.go:159] libmachine.API.Create for "kindnet-271000" (driver="qemu2")
	I0915 11:53:22.739706    5901 client.go:168] LocalClient.Create starting
	I0915 11:53:22.739766    5901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:22.739798    5901 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:22.739808    5901 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:22.739849    5901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:22.739871    5901 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:22.739880    5901 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:22.740218    5901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:22.898291    5901 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:22.985950    5901 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:22.985957    5901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:22.986138    5901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:22.995562    5901 main.go:141] libmachine: STDOUT: 
	I0915 11:53:22.995583    5901 main.go:141] libmachine: STDERR: 
	I0915 11:53:22.995643    5901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2 +20000M
	I0915 11:53:23.003493    5901 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:23.003507    5901 main.go:141] libmachine: STDERR: 
	I0915 11:53:23.003532    5901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:23.003538    5901 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:23.003549    5901 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:23.003574    5901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:5b:f4:af:d7:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:23.005244    5901 main.go:141] libmachine: STDOUT: 
	I0915 11:53:23.005257    5901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:23.005278    5901 client.go:171] duration metric: took 265.568542ms to LocalClient.Create
	I0915 11:53:25.007434    5901 start.go:128] duration metric: took 2.292294167s to createHost
	I0915 11:53:25.007485    5901 start.go:83] releasing machines lock for "kindnet-271000", held for 2.292410958s
	W0915 11:53:25.007544    5901 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:25.014869    5901 out.go:177] * Deleting "kindnet-271000" in qemu2 ...
	W0915 11:53:25.045018    5901 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:25.045049    5901 start.go:729] Will try again in 5 seconds ...
	I0915 11:53:30.047272    5901 start.go:360] acquireMachinesLock for kindnet-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:30.047667    5901 start.go:364] duration metric: took 310.959µs to acquireMachinesLock for "kindnet-271000"
	I0915 11:53:30.047770    5901 start.go:93] Provisioning new machine with config: &{Name:kindnet-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:30.047965    5901 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:30.058959    5901 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:30.101492    5901 start.go:159] libmachine.API.Create for "kindnet-271000" (driver="qemu2")
	I0915 11:53:30.101549    5901 client.go:168] LocalClient.Create starting
	I0915 11:53:30.101664    5901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:30.101724    5901 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:30.101739    5901 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:30.101797    5901 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:30.101874    5901 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:30.101885    5901 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:30.102371    5901 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:30.270415    5901 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:30.450135    5901 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:30.450146    5901 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:30.450366    5901 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:30.460064    5901 main.go:141] libmachine: STDOUT: 
	I0915 11:53:30.460085    5901 main.go:141] libmachine: STDERR: 
	I0915 11:53:30.460139    5901 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2 +20000M
	I0915 11:53:30.468017    5901 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:30.468030    5901 main.go:141] libmachine: STDERR: 
	I0915 11:53:30.468044    5901 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:30.468049    5901 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:30.468056    5901 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:30.468081    5901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:3a:20:e9:0d:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kindnet-271000/disk.qcow2
	I0915 11:53:30.469801    5901 main.go:141] libmachine: STDOUT: 
	I0915 11:53:30.469818    5901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:30.469830    5901 client.go:171] duration metric: took 368.278958ms to LocalClient.Create
	I0915 11:53:32.472168    5901 start.go:128] duration metric: took 2.42418625s to createHost
	I0915 11:53:32.472232    5901 start.go:83] releasing machines lock for "kindnet-271000", held for 2.424565334s
	W0915 11:53:32.472637    5901 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:32.488448    5901 out.go:201] 
	W0915 11:53:32.492451    5901 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:53:32.492497    5901 out.go:270] * 
	* 
	W0915 11:53:32.494827    5901 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:53:32.507413    5901 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.92s)
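Every start attempt in this group dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched; disk creation and certificate handling succeed beforehand. A minimal host-side triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the paths above suggest (the launch flags below are an assumption based on a typical socket_vmnet setup, not something this report records):

	# Does the socket exist, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, start it in the foreground to watch for errors
	# (hypothetical gateway address -- adjust to the test network):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet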

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.863704209s)

                                                
                                                
-- stdout --
	* [flannel-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-271000" primary control-plane node in "flannel-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:53:34.791215    6026 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:53:34.791343    6026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:34.791346    6026 out.go:358] Setting ErrFile to fd 2...
	I0915 11:53:34.791349    6026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:34.791503    6026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:53:34.792694    6026 out.go:352] Setting JSON to false
	I0915 11:53:34.809148    6026 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4977,"bootTime":1726421437,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:53:34.809218    6026 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:53:34.815290    6026 out.go:177] * [flannel-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:53:34.823280    6026 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:53:34.823370    6026 notify.go:220] Checking for updates...
	I0915 11:53:34.829202    6026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:53:34.832227    6026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:53:34.833442    6026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:53:34.836176    6026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:53:34.839253    6026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:53:34.842533    6026 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:53:34.842592    6026 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:53:34.842643    6026 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:53:34.847220    6026 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:53:34.854261    6026 start.go:297] selected driver: qemu2
	I0915 11:53:34.854268    6026 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:53:34.854274    6026 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:53:34.856599    6026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:53:34.860225    6026 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:53:34.863248    6026 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:53:34.863266    6026 cni.go:84] Creating CNI manager for "flannel"
	I0915 11:53:34.863273    6026 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0915 11:53:34.863308    6026 start.go:340] cluster config:
	{Name:flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:53:34.866691    6026 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:53:34.874204    6026 out.go:177] * Starting "flannel-271000" primary control-plane node in "flannel-271000" cluster
	I0915 11:53:34.878215    6026 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:53:34.878228    6026 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:53:34.878239    6026 cache.go:56] Caching tarball of preloaded images
	I0915 11:53:34.878295    6026 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:53:34.878300    6026 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:53:34.878347    6026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/flannel-271000/config.json ...
	I0915 11:53:34.878358    6026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/flannel-271000/config.json: {Name:mk47c0906165f2a0acdec59710edcf1f9284a1c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:53:34.878659    6026 start.go:360] acquireMachinesLock for flannel-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:34.878697    6026 start.go:364] duration metric: took 31.667µs to acquireMachinesLock for "flannel-271000"
	I0915 11:53:34.878708    6026 start.go:93] Provisioning new machine with config: &{Name:flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:34.878734    6026 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:34.886347    6026 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:34.901674    6026 start.go:159] libmachine.API.Create for "flannel-271000" (driver="qemu2")
	I0915 11:53:34.901700    6026 client.go:168] LocalClient.Create starting
	I0915 11:53:34.901768    6026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:34.901800    6026 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:34.901810    6026 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:34.901845    6026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:34.901872    6026 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:34.901880    6026 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:34.902208    6026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:35.061572    6026 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:35.166517    6026 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:35.166523    6026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:35.166703    6026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:35.176145    6026 main.go:141] libmachine: STDOUT: 
	I0915 11:53:35.176162    6026 main.go:141] libmachine: STDERR: 
	I0915 11:53:35.176219    6026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2 +20000M
	I0915 11:53:35.184146    6026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:35.184163    6026 main.go:141] libmachine: STDERR: 
	I0915 11:53:35.184186    6026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:35.184191    6026 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:35.184203    6026 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:35.184230    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5e:92:ce:60:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:35.185865    6026 main.go:141] libmachine: STDOUT: 
	I0915 11:53:35.185880    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:35.185899    6026 client.go:171] duration metric: took 284.197084ms to LocalClient.Create
	I0915 11:53:37.188090    6026 start.go:128] duration metric: took 2.309347792s to createHost
	I0915 11:53:37.188177    6026 start.go:83] releasing machines lock for "flannel-271000", held for 2.309487417s
	W0915 11:53:37.188232    6026 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:37.202701    6026 out.go:177] * Deleting "flannel-271000" in qemu2 ...
	W0915 11:53:37.239782    6026 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:37.239805    6026 start.go:729] Will try again in 5 seconds ...
	I0915 11:53:42.242035    6026 start.go:360] acquireMachinesLock for flannel-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:42.242572    6026 start.go:364] duration metric: took 428.625µs to acquireMachinesLock for "flannel-271000"
	I0915 11:53:42.242653    6026 start.go:93] Provisioning new machine with config: &{Name:flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:42.242984    6026 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:42.253904    6026 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:42.304883    6026 start.go:159] libmachine.API.Create for "flannel-271000" (driver="qemu2")
	I0915 11:53:42.304939    6026 client.go:168] LocalClient.Create starting
	I0915 11:53:42.305064    6026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:42.305131    6026 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:42.305149    6026 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:42.305208    6026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:42.305253    6026 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:42.305263    6026 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:42.305879    6026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:42.472573    6026 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:42.556574    6026 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:42.556583    6026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:42.556775    6026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:42.566001    6026 main.go:141] libmachine: STDOUT: 
	I0915 11:53:42.566025    6026 main.go:141] libmachine: STDERR: 
	I0915 11:53:42.566108    6026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2 +20000M
	I0915 11:53:42.574018    6026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:42.574039    6026 main.go:141] libmachine: STDERR: 
	I0915 11:53:42.574051    6026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:42.574056    6026 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:42.574072    6026 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:42.574119    6026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:50:9a:d9:ea:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/flannel-271000/disk.qcow2
	I0915 11:53:42.575830    6026 main.go:141] libmachine: STDOUT: 
	I0915 11:53:42.575847    6026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:42.575861    6026 client.go:171] duration metric: took 270.917333ms to LocalClient.Create
	I0915 11:53:44.578067    6026 start.go:128] duration metric: took 2.335057375s to createHost
	I0915 11:53:44.578192    6026 start.go:83] releasing machines lock for "flannel-271000", held for 2.335599416s
	W0915 11:53:44.578602    6026 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:44.597398    6026 out.go:201] 
	W0915 11:53:44.601283    6026 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:53:44.601317    6026 out.go:270] * 
	* 
	W0915 11:53:44.603174    6026 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:53:44.612351    6026 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.87s)
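Disk provisioning succeeds on every attempt above; only the socket_vmnet attach fails. The two qemu-img steps minikube runs can be replayed in isolation to rule out the disk path. A sketch with placeholder file names in place of the full Jenkins workspace paths:

	# Replay the disk-image steps from the log (placeholder paths):
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # raw -> qcow2, as in the log
	qemu-img resize disk.qcow2 +20000M                           # grow by 20000 MB
	qemu-img info disk.qcow2                                     # confirm format and virtual size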

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.799298458s)

                                                
                                                
-- stdout --
	* [enable-default-cni-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-271000" primary control-plane node in "enable-default-cni-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:53:47.041857    6155 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:53:47.042006    6155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:47.042011    6155 out.go:358] Setting ErrFile to fd 2...
	I0915 11:53:47.042013    6155 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:47.042152    6155 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:53:47.043421    6155 out.go:352] Setting JSON to false
	I0915 11:53:47.061972    6155 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4990,"bootTime":1726421437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:53:47.062045    6155 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:53:47.065798    6155 out.go:177] * [enable-default-cni-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:53:47.073937    6155 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:53:47.074061    6155 notify.go:220] Checking for updates...
	I0915 11:53:47.080847    6155 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:53:47.083904    6155 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:53:47.086865    6155 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:53:47.089836    6155 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:53:47.092838    6155 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:53:47.096229    6155 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:53:47.096292    6155 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:53:47.096347    6155 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:53:47.100812    6155 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:53:47.106855    6155 start.go:297] selected driver: qemu2
	I0915 11:53:47.106863    6155 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:53:47.106878    6155 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:53:47.109392    6155 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:53:47.112833    6155 out.go:177] * Automatically selected the socket_vmnet network
	E0915 11:53:47.115918    6155 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0915 11:53:47.115933    6155 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:53:47.115951    6155 cni.go:84] Creating CNI manager for "bridge"
	I0915 11:53:47.115962    6155 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:53:47.115996    6155 start.go:340] cluster config:
	{Name:enable-default-cni-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:53:47.120041    6155 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:53:47.123813    6155 out.go:177] * Starting "enable-default-cni-271000" primary control-plane node in "enable-default-cni-271000" cluster
	I0915 11:53:47.131810    6155 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:53:47.131843    6155 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:53:47.131861    6155 cache.go:56] Caching tarball of preloaded images
	I0915 11:53:47.131953    6155 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:53:47.131959    6155 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:53:47.132014    6155 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/enable-default-cni-271000/config.json ...
	I0915 11:53:47.132026    6155 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/enable-default-cni-271000/config.json: {Name:mk9edbe868df01c728eb361fc39be5370ef60277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:53:47.132259    6155 start.go:360] acquireMachinesLock for enable-default-cni-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:47.132296    6155 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "enable-default-cni-271000"
	I0915 11:53:47.132307    6155 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:47.132342    6155 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:47.139889    6155 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:47.156422    6155 start.go:159] libmachine.API.Create for "enable-default-cni-271000" (driver="qemu2")
	I0915 11:53:47.156460    6155 client.go:168] LocalClient.Create starting
	I0915 11:53:47.156534    6155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:47.156571    6155 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:47.156580    6155 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:47.156620    6155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:47.156642    6155 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:47.156651    6155 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:47.157047    6155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:47.316875    6155 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:47.371243    6155 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:47.371254    6155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:47.371469    6155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:47.381178    6155 main.go:141] libmachine: STDOUT: 
	I0915 11:53:47.381204    6155 main.go:141] libmachine: STDERR: 
	I0915 11:53:47.381262    6155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2 +20000M
	I0915 11:53:47.389527    6155 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:47.389547    6155 main.go:141] libmachine: STDERR: 
	I0915 11:53:47.389562    6155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:47.389567    6155 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:47.389577    6155 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:47.389604    6155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:46:81:dc:5c:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:47.391349    6155 main.go:141] libmachine: STDOUT: 
	I0915 11:53:47.391367    6155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:47.391388    6155 client.go:171] duration metric: took 234.922208ms to LocalClient.Create
	I0915 11:53:49.393544    6155 start.go:128] duration metric: took 2.261192167s to createHost
	I0915 11:53:49.393600    6155 start.go:83] releasing machines lock for "enable-default-cni-271000", held for 2.261313459s
	W0915 11:53:49.393639    6155 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:49.405465    6155 out.go:177] * Deleting "enable-default-cni-271000" in qemu2 ...
	W0915 11:53:49.432505    6155 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:49.432538    6155 start.go:729] Will try again in 5 seconds ...
	I0915 11:53:54.434740    6155 start.go:360] acquireMachinesLock for enable-default-cni-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:54.435209    6155 start.go:364] duration metric: took 369.541µs to acquireMachinesLock for "enable-default-cni-271000"
	I0915 11:53:54.435322    6155 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:54.435517    6155 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:54.440135    6155 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:54.490066    6155 start.go:159] libmachine.API.Create for "enable-default-cni-271000" (driver="qemu2")
	I0915 11:53:54.490137    6155 client.go:168] LocalClient.Create starting
	I0915 11:53:54.490260    6155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:54.490321    6155 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:54.490337    6155 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:54.490392    6155 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:54.490442    6155 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:54.490451    6155 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:54.490964    6155 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:54.657486    6155 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:54.737173    6155 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:54.737183    6155 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:54.737380    6155 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:54.746824    6155 main.go:141] libmachine: STDOUT: 
	I0915 11:53:54.746839    6155 main.go:141] libmachine: STDERR: 
	I0915 11:53:54.746904    6155 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2 +20000M
	I0915 11:53:54.754902    6155 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:54.754922    6155 main.go:141] libmachine: STDERR: 
	I0915 11:53:54.754940    6155 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:54.754946    6155 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:54.754954    6155 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:54.754979    6155 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:74:f3:b4:a6:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/enable-default-cni-271000/disk.qcow2
	I0915 11:53:54.756670    6155 main.go:141] libmachine: STDOUT: 
	I0915 11:53:54.756686    6155 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:54.756717    6155 client.go:171] duration metric: took 266.576958ms to LocalClient.Create
	I0915 11:53:56.758908    6155 start.go:128] duration metric: took 2.323375666s to createHost
	I0915 11:53:56.758981    6155 start.go:83] releasing machines lock for "enable-default-cni-271000", held for 2.323768042s
	W0915 11:53:56.759362    6155 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:53:56.776032    6155 out.go:201] 
	W0915 11:53:56.779087    6155 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:53:56.779144    6155 out.go:270] * 
	* 
	W0915 11:53:56.781625    6155 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:53:56.794911    6155 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.80s)
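Note: every TestNetworkPlugins Start failure in this run exits the same way — socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so each qemu-system-aarch64 launch aborts before the VM ever boots. A minimal check on the affected host might look like the following; this is a sketch that assumes socket_vmnet was installed under /opt/socket_vmnet (as the SocketVMnetClientPath in the config dump suggests) and that it was installed via Homebrew, which is not confirmed by the log:

	# is anything listening on the unix socket minikube is configured to use?
	ls -l /var/run/socket_vmnet
	# if the socket is missing, (re)start the daemon; with a Homebrew install
	# this is typically done as a root service:
	sudo brew services start socket_vmnet
	# or run it in the foreground to watch for errors (gateway address is illustrative):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is accepting connections, these Start subtests should at least get past host creation.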

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.852648792s)

                                                
                                                
-- stdout --
	* [bridge-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-271000" primary control-plane node in "bridge-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:53:59.009415    6277 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:53:59.009564    6277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:59.009567    6277 out.go:358] Setting ErrFile to fd 2...
	I0915 11:53:59.009569    6277 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:53:59.009711    6277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:53:59.010907    6277 out.go:352] Setting JSON to false
	I0915 11:53:59.027289    6277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5002,"bootTime":1726421437,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:53:59.027357    6277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:53:59.034606    6277 out.go:177] * [bridge-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:53:59.043490    6277 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:53:59.043543    6277 notify.go:220] Checking for updates...
	I0915 11:53:59.049536    6277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:53:59.052545    6277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:53:59.055481    6277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:53:59.058565    6277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:53:59.061505    6277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:53:59.064791    6277 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:53:59.064859    6277 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:53:59.064908    6277 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:53:59.068508    6277 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:53:59.075495    6277 start.go:297] selected driver: qemu2
	I0915 11:53:59.075501    6277 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:53:59.075508    6277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:53:59.077569    6277 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:53:59.080571    6277 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:53:59.083594    6277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:53:59.083618    6277 cni.go:84] Creating CNI manager for "bridge"
	I0915 11:53:59.083626    6277 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:53:59.083656    6277 start.go:340] cluster config:
	{Name:bridge-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:53:59.087175    6277 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:53:59.094480    6277 out.go:177] * Starting "bridge-271000" primary control-plane node in "bridge-271000" cluster
	I0915 11:53:59.098461    6277 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:53:59.098473    6277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:53:59.098482    6277 cache.go:56] Caching tarball of preloaded images
	I0915 11:53:59.098532    6277 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:53:59.098537    6277 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:53:59.098598    6277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/bridge-271000/config.json ...
	I0915 11:53:59.098609    6277 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/bridge-271000/config.json: {Name:mkea105f49865debc70a1af10b50a33ee63107d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:53:59.098811    6277 start.go:360] acquireMachinesLock for bridge-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:53:59.098843    6277 start.go:364] duration metric: took 26.916µs to acquireMachinesLock for "bridge-271000"
	I0915 11:53:59.098854    6277 start.go:93] Provisioning new machine with config: &{Name:bridge-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:53:59.098877    6277 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:53:59.107466    6277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:53:59.124303    6277 start.go:159] libmachine.API.Create for "bridge-271000" (driver="qemu2")
	I0915 11:53:59.124342    6277 client.go:168] LocalClient.Create starting
	I0915 11:53:59.124407    6277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:53:59.124436    6277 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:59.124446    6277 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:59.124481    6277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:53:59.124503    6277 main.go:141] libmachine: Decoding PEM data...
	I0915 11:53:59.124513    6277 main.go:141] libmachine: Parsing certificate...
	I0915 11:53:59.124876    6277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:53:59.286943    6277 main.go:141] libmachine: Creating SSH key...
	I0915 11:53:59.386867    6277 main.go:141] libmachine: Creating Disk image...
	I0915 11:53:59.386877    6277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:53:59.387083    6277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:53:59.396525    6277 main.go:141] libmachine: STDOUT: 
	I0915 11:53:59.396549    6277 main.go:141] libmachine: STDERR: 
	I0915 11:53:59.396598    6277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2 +20000M
	I0915 11:53:59.404506    6277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:53:59.404529    6277 main.go:141] libmachine: STDERR: 
	I0915 11:53:59.404548    6277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:53:59.404554    6277 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:53:59.404564    6277 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:53:59.404592    6277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:c9:38:97:2c:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:53:59.406232    6277 main.go:141] libmachine: STDOUT: 
	I0915 11:53:59.406246    6277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:53:59.406265    6277 client.go:171] duration metric: took 281.919042ms to LocalClient.Create
	I0915 11:54:01.408477    6277 start.go:128] duration metric: took 2.309584833s to createHost
	I0915 11:54:01.408582    6277 start.go:83] releasing machines lock for "bridge-271000", held for 2.309746458s
	W0915 11:54:01.408632    6277 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:01.420954    6277 out.go:177] * Deleting "bridge-271000" in qemu2 ...
	W0915 11:54:01.453911    6277 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:01.453938    6277 start.go:729] Will try again in 5 seconds ...
	I0915 11:54:06.456040    6277 start.go:360] acquireMachinesLock for bridge-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:06.456315    6277 start.go:364] duration metric: took 217.125µs to acquireMachinesLock for "bridge-271000"
	I0915 11:54:06.456355    6277 start.go:93] Provisioning new machine with config: &{Name:bridge-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:06.456524    6277 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:06.461727    6277 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:06.497177    6277 start.go:159] libmachine.API.Create for "bridge-271000" (driver="qemu2")
	I0915 11:54:06.497222    6277 client.go:168] LocalClient.Create starting
	I0915 11:54:06.497358    6277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:06.497417    6277 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:06.497430    6277 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:06.497488    6277 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:06.497527    6277 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:06.497535    6277 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:06.498014    6277 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:06.664999    6277 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:06.757456    6277 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:06.757464    6277 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:06.757650    6277 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:54:06.767540    6277 main.go:141] libmachine: STDOUT: 
	I0915 11:54:06.767562    6277 main.go:141] libmachine: STDERR: 
	I0915 11:54:06.767618    6277 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2 +20000M
	I0915 11:54:06.775641    6277 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:06.775657    6277 main.go:141] libmachine: STDERR: 
	I0915 11:54:06.775674    6277 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:54:06.775679    6277 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:06.775688    6277 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:06.775715    6277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:13:1e:39:49:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/bridge-271000/disk.qcow2
	I0915 11:54:06.777374    6277 main.go:141] libmachine: STDOUT: 
	I0915 11:54:06.777388    6277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:06.777400    6277 client.go:171] duration metric: took 280.175167ms to LocalClient.Create
	I0915 11:54:08.779591    6277 start.go:128] duration metric: took 2.3230505s to createHost
	I0915 11:54:08.779704    6277 start.go:83] releasing machines lock for "bridge-271000", held for 2.323384041s
	W0915 11:54:08.780098    6277 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:08.794748    6277 out.go:201] 
	W0915 11:54:08.799822    6277 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:54:08.799850    6277 out.go:270] * 
	* 
	W0915 11:54:08.802543    6277 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:54:08.820765    6277 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.85s)
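Note: the disk-provisioning half of each attempt does succeed; only the networking step fails. The two qemu-img invocations visible in the stderr transcript above are the entire disk workflow, and they can be reproduced standalone to rule out qemu-img itself. A sketch with placeholder paths (disk.qcow2.raw and disk.qcow2 stand in for the files under the profile's machines directory):

	# convert the raw scratch image libmachine wrote into qcow2 format
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	# then grow the qcow2 by 20000M, matching the Disk=20000MB request
	qemu-img resize disk.qcow2 +20000M

On success the second command prints "Image resized." with empty STDERR, which is exactly what the logs show before the socket_vmnet_client step aborts.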

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0915 11:54:13.150782    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.975994292s)

                                                
                                                
-- stdout --
	* [kubenet-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-271000" primary control-plane node in "kubenet-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:54:11.048156    6394 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:54:11.048276    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:11.048280    6394 out.go:358] Setting ErrFile to fd 2...
	I0915 11:54:11.048282    6394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:11.048423    6394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:54:11.049547    6394 out.go:352] Setting JSON to false
	I0915 11:54:11.066149    6394 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5014,"bootTime":1726421437,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:54:11.066215    6394 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:54:11.073177    6394 out.go:177] * [kubenet-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:54:11.081019    6394 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:54:11.081102    6394 notify.go:220] Checking for updates...
	I0915 11:54:11.088921    6394 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:54:11.091966    6394 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:54:11.097850    6394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:54:11.100908    6394 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:54:11.103902    6394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:54:11.107241    6394 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:54:11.107305    6394 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:54:11.107345    6394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:54:11.111947    6394 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:54:11.118906    6394 start.go:297] selected driver: qemu2
	I0915 11:54:11.118918    6394 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:54:11.118924    6394 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:54:11.121261    6394 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:54:11.123915    6394 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:54:11.127014    6394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:54:11.127032    6394 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0915 11:54:11.127057    6394 start.go:340] cluster config:
	{Name:kubenet-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:54:11.130728    6394 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:54:11.137980    6394 out.go:177] * Starting "kubenet-271000" primary control-plane node in "kubenet-271000" cluster
	I0915 11:54:11.141790    6394 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:54:11.141802    6394 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:54:11.141810    6394 cache.go:56] Caching tarball of preloaded images
	I0915 11:54:11.141871    6394 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:54:11.141876    6394 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:54:11.141932    6394 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kubenet-271000/config.json ...
	I0915 11:54:11.141942    6394 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/kubenet-271000/config.json: {Name:mk68dc280ca14830dda8ec8b7ac287f7cdbfd654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:54:11.142340    6394 start.go:360] acquireMachinesLock for kubenet-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:11.142373    6394 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "kubenet-271000"
	I0915 11:54:11.142386    6394 start.go:93] Provisioning new machine with config: &{Name:kubenet-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:11.142423    6394 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:11.145964    6394 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:11.161240    6394 start.go:159] libmachine.API.Create for "kubenet-271000" (driver="qemu2")
	I0915 11:54:11.161262    6394 client.go:168] LocalClient.Create starting
	I0915 11:54:11.161319    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:11.161347    6394 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:11.161356    6394 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:11.161392    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:11.161415    6394 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:11.161422    6394 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:11.161716    6394 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:11.321250    6394 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:11.507653    6394 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:11.507665    6394 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:11.507869    6394 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:11.517574    6394 main.go:141] libmachine: STDOUT: 
	I0915 11:54:11.517604    6394 main.go:141] libmachine: STDERR: 
	I0915 11:54:11.517691    6394 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2 +20000M
	I0915 11:54:11.525616    6394 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:11.525633    6394 main.go:141] libmachine: STDERR: 
	I0915 11:54:11.525650    6394 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:11.525655    6394 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:11.525668    6394 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:11.525691    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:0f:e9:ea:b2:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:11.527383    6394 main.go:141] libmachine: STDOUT: 
	I0915 11:54:11.527397    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:11.527416    6394 client.go:171] duration metric: took 366.153209ms to LocalClient.Create
	I0915 11:54:13.529606    6394 start.go:128] duration metric: took 2.38717575s to createHost
	I0915 11:54:13.529670    6394 start.go:83] releasing machines lock for "kubenet-271000", held for 2.387304916s
	W0915 11:54:13.529721    6394 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:13.545411    6394 out.go:177] * Deleting "kubenet-271000" in qemu2 ...
	W0915 11:54:13.575007    6394 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:13.575035    6394 start.go:729] Will try again in 5 seconds ...
	I0915 11:54:18.577135    6394 start.go:360] acquireMachinesLock for kubenet-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:18.577238    6394 start.go:364] duration metric: took 85.834µs to acquireMachinesLock for "kubenet-271000"
	I0915 11:54:18.577251    6394 start.go:93] Provisioning new machine with config: &{Name:kubenet-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:18.577299    6394 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:18.585521    6394 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:18.601243    6394 start.go:159] libmachine.API.Create for "kubenet-271000" (driver="qemu2")
	I0915 11:54:18.601274    6394 client.go:168] LocalClient.Create starting
	I0915 11:54:18.601332    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:18.601365    6394 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:18.601371    6394 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:18.601404    6394 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:18.601434    6394 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:18.601441    6394 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:18.601740    6394 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:18.763538    6394 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:18.931284    6394 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:18.931298    6394 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:18.931503    6394 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:18.940744    6394 main.go:141] libmachine: STDOUT: 
	I0915 11:54:18.940773    6394 main.go:141] libmachine: STDERR: 
	I0915 11:54:18.940841    6394 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2 +20000M
	I0915 11:54:18.948977    6394 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:18.948995    6394 main.go:141] libmachine: STDERR: 
	I0915 11:54:18.949018    6394 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:18.949024    6394 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:18.949032    6394 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:18.949061    6394 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:9a:83:22:a2:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/kubenet-271000/disk.qcow2
	I0915 11:54:18.950850    6394 main.go:141] libmachine: STDOUT: 
	I0915 11:54:18.950866    6394 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:18.950879    6394 client.go:171] duration metric: took 349.603958ms to LocalClient.Create
	I0915 11:54:20.953202    6394 start.go:128] duration metric: took 2.37589s to createHost
	I0915 11:54:20.953298    6394 start.go:83] releasing machines lock for "kubenet-271000", held for 2.376067709s
	W0915 11:54:20.953721    6394 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:20.959885    6394 out.go:201] 
	W0915 11:54:20.969913    6394 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:54:20.969961    6394 out.go:270] * 
	* 
	W0915 11:54:20.972293    6394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:54:20.982794    6394 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.98s)
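Note: net_test.go drives each of these subtests by exec'ing the freshly built out/minikube-darwin-arm64, so a single plugin can be retried in isolation with go test's -run filter once socket_vmnet is healthy. A sketch, assuming a minikube checkout with the binary already built; the integration harness may expect additional flags (for example, which minikube binary to run), so consult the repo's Makefile for the canonical invocation:

	# from the minikube repo root; the subtest name matches the FAIL line above
	go test ./test/integration -run 'TestNetworkPlugins/group/kubenet/Start' -timeout 30m -v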

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0915 11:54:29.810847    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.902171166s)

-- stdout --
	* [custom-flannel-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-271000" primary control-plane node in "custom-flannel-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:54:23.203974    6509 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:54:23.204131    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:23.204134    6509 out.go:358] Setting ErrFile to fd 2...
	I0915 11:54:23.204136    6509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:23.204280    6509 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:54:23.205358    6509 out.go:352] Setting JSON to false
	I0915 11:54:23.221727    6509 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5026,"bootTime":1726421437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:54:23.221811    6509 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:54:23.228110    6509 out.go:177] * [custom-flannel-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:54:23.236047    6509 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:54:23.236097    6509 notify.go:220] Checking for updates...
	I0915 11:54:23.242962    6509 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:54:23.245959    6509 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:54:23.248998    6509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:54:23.252027    6509 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:54:23.254957    6509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:54:23.258328    6509 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:54:23.258401    6509 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:54:23.258442    6509 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:54:23.261953    6509 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:54:23.268990    6509 start.go:297] selected driver: qemu2
	I0915 11:54:23.268995    6509 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:54:23.269001    6509 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:54:23.271346    6509 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:54:23.273949    6509 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:54:23.277029    6509 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:54:23.277047    6509 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0915 11:54:23.277054    6509 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0915 11:54:23.277083    6509 start.go:340] cluster config:
	{Name:custom-flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:54:23.280634    6509 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:54:23.287987    6509 out.go:177] * Starting "custom-flannel-271000" primary control-plane node in "custom-flannel-271000" cluster
	I0915 11:54:23.291999    6509 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:54:23.292013    6509 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:54:23.292020    6509 cache.go:56] Caching tarball of preloaded images
	I0915 11:54:23.292076    6509 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:54:23.292082    6509 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:54:23.292138    6509 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/custom-flannel-271000/config.json ...
	I0915 11:54:23.292150    6509 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/custom-flannel-271000/config.json: {Name:mk0a19b5bd5abf833b070833cbf73094c83ab4c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:54:23.292365    6509 start.go:360] acquireMachinesLock for custom-flannel-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:23.292403    6509 start.go:364] duration metric: took 29.416µs to acquireMachinesLock for "custom-flannel-271000"
	I0915 11:54:23.292416    6509 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:23.292452    6509 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:23.299972    6509 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:23.317074    6509 start.go:159] libmachine.API.Create for "custom-flannel-271000" (driver="qemu2")
	I0915 11:54:23.317107    6509 client.go:168] LocalClient.Create starting
	I0915 11:54:23.317174    6509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:23.317206    6509 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:23.317219    6509 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:23.317257    6509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:23.317281    6509 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:23.317286    6509 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:23.317639    6509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:23.476259    6509 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:23.582404    6509 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:23.582410    6509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:23.582591    6509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:23.591910    6509 main.go:141] libmachine: STDOUT: 
	I0915 11:54:23.591931    6509 main.go:141] libmachine: STDERR: 
	I0915 11:54:23.591993    6509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2 +20000M
	I0915 11:54:23.600287    6509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:23.600303    6509 main.go:141] libmachine: STDERR: 
	I0915 11:54:23.600320    6509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:23.600328    6509 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:23.600343    6509 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:23.600370    6509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:2b:c5:db:d3:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:23.602144    6509 main.go:141] libmachine: STDOUT: 
	I0915 11:54:23.602160    6509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:23.602182    6509 client.go:171] duration metric: took 285.069625ms to LocalClient.Create
	I0915 11:54:25.604456    6509 start.go:128] duration metric: took 2.311916458s to createHost
	I0915 11:54:25.604548    6509 start.go:83] releasing machines lock for "custom-flannel-271000", held for 2.312152208s
	W0915 11:54:25.604599    6509 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:25.616034    6509 out.go:177] * Deleting "custom-flannel-271000" in qemu2 ...
	W0915 11:54:25.653099    6509 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:25.653133    6509 start.go:729] Will try again in 5 seconds ...
	I0915 11:54:30.655279    6509 start.go:360] acquireMachinesLock for custom-flannel-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:30.655775    6509 start.go:364] duration metric: took 410.667µs to acquireMachinesLock for "custom-flannel-271000"
	I0915 11:54:30.655905    6509 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:30.656205    6509 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:30.669568    6509 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:30.721754    6509 start.go:159] libmachine.API.Create for "custom-flannel-271000" (driver="qemu2")
	I0915 11:54:30.721802    6509 client.go:168] LocalClient.Create starting
	I0915 11:54:30.721932    6509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:30.722010    6509 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:30.722026    6509 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:30.722096    6509 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:30.722145    6509 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:30.722161    6509 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:30.722727    6509 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:30.890645    6509 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:31.012111    6509 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:31.012117    6509 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:31.012315    6509 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:31.021648    6509 main.go:141] libmachine: STDOUT: 
	I0915 11:54:31.021664    6509 main.go:141] libmachine: STDERR: 
	I0915 11:54:31.021740    6509 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2 +20000M
	I0915 11:54:31.029620    6509 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:31.029634    6509 main.go:141] libmachine: STDERR: 
	I0915 11:54:31.029647    6509 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:31.029652    6509 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:31.029662    6509 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:31.029689    6509 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:6c:c5:4d:3b:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/custom-flannel-271000/disk.qcow2
	I0915 11:54:31.031382    6509 main.go:141] libmachine: STDOUT: 
	I0915 11:54:31.031396    6509 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:31.031408    6509 client.go:171] duration metric: took 309.601ms to LocalClient.Create
	I0915 11:54:33.033481    6509 start.go:128] duration metric: took 2.377274458s to createHost
	I0915 11:54:33.033521    6509 start.go:83] releasing machines lock for "custom-flannel-271000", held for 2.377740083s
	W0915 11:54:33.033647    6509 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:33.050993    6509 out.go:201] 
	W0915 11:54:33.055010    6509 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:54:33.055021    6509 out.go:270] * 
	* 
	W0915 11:54:33.055511    6509 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:54:33.064966    6509 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.90s)
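
Note: the failing step is the libmachine invocation above: qemu-system-aarch64 is started via /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to /var/run/socket_vmnet and then passes the connection to the child as file descriptor 3 (hence "-netdev socket,id=net0,fd=3"). The connection can be exercised without booting QEMU; a hypothetical smoke test, assuming the client's "socket_vmnet_client <socket> <command...>" usage:

	# Runs /usr/bin/true under the client; succeeds only if the daemon accepts the
	# connection, otherwise fails with the same "Failed to connect ... Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket_vmnet reachable" \
	  || echo "socket_vmnet not reachable"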

TestNetworkPlugins/group/calico/Start (9.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.880619542s)

-- stdout --
	* [calico-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-271000" primary control-plane node in "calico-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:54:35.459171    6632 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:54:35.459304    6632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:35.459306    6632 out.go:358] Setting ErrFile to fd 2...
	I0915 11:54:35.459309    6632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:35.459445    6632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:54:35.460540    6632 out.go:352] Setting JSON to false
	I0915 11:54:35.476973    6632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5038,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:54:35.477059    6632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:54:35.482085    6632 out.go:177] * [calico-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:54:35.489925    6632 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:54:35.489984    6632 notify.go:220] Checking for updates...
	I0915 11:54:35.496863    6632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:54:35.499882    6632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:54:35.502944    6632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:54:35.504363    6632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:54:35.507890    6632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:54:35.511229    6632 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:54:35.511294    6632 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:54:35.511346    6632 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:54:35.515769    6632 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:54:35.522877    6632 start.go:297] selected driver: qemu2
	I0915 11:54:35.522883    6632 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:54:35.522889    6632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:54:35.525213    6632 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:54:35.527905    6632 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:54:35.531092    6632 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:54:35.531110    6632 cni.go:84] Creating CNI manager for "calico"
	I0915 11:54:35.531115    6632 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0915 11:54:35.531149    6632 start.go:340] cluster config:
	{Name:calico-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:54:35.534887    6632 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:54:35.540916    6632 out.go:177] * Starting "calico-271000" primary control-plane node in "calico-271000" cluster
	I0915 11:54:35.544870    6632 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:54:35.544885    6632 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:54:35.544896    6632 cache.go:56] Caching tarball of preloaded images
	I0915 11:54:35.544950    6632 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:54:35.544955    6632 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:54:35.545013    6632 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/calico-271000/config.json ...
	I0915 11:54:35.545024    6632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/calico-271000/config.json: {Name:mk30c0d33a0113b167c23701d2fa73f04868d0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:54:35.545415    6632 start.go:360] acquireMachinesLock for calico-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:35.545450    6632 start.go:364] duration metric: took 28.875µs to acquireMachinesLock for "calico-271000"
	I0915 11:54:35.545460    6632 start.go:93] Provisioning new machine with config: &{Name:calico-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:35.545493    6632 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:35.552928    6632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:35.568706    6632 start.go:159] libmachine.API.Create for "calico-271000" (driver="qemu2")
	I0915 11:54:35.568738    6632 client.go:168] LocalClient.Create starting
	I0915 11:54:35.568804    6632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:35.568837    6632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:35.568845    6632 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:35.568898    6632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:35.568922    6632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:35.568929    6632 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:35.569242    6632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:35.729830    6632 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:35.770935    6632 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:35.770941    6632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:35.771115    6632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:35.780448    6632 main.go:141] libmachine: STDOUT: 
	I0915 11:54:35.780465    6632 main.go:141] libmachine: STDERR: 
	I0915 11:54:35.780531    6632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2 +20000M
	I0915 11:54:35.788633    6632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:35.788649    6632 main.go:141] libmachine: STDERR: 
	I0915 11:54:35.788663    6632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:35.788668    6632 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:35.788682    6632 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:35.788706    6632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:a1:a8:be:72:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:35.790368    6632 main.go:141] libmachine: STDOUT: 
	I0915 11:54:35.790386    6632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:35.790407    6632 client.go:171] duration metric: took 221.665333ms to LocalClient.Create
	I0915 11:54:37.792606    6632 start.go:128] duration metric: took 2.247094042s to createHost
	I0915 11:54:37.792673    6632 start.go:83] releasing machines lock for "calico-271000", held for 2.247230792s
	W0915 11:54:37.792744    6632 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:37.808843    6632 out.go:177] * Deleting "calico-271000" in qemu2 ...
	W0915 11:54:37.838433    6632 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:37.838467    6632 start.go:729] Will try again in 5 seconds ...
	I0915 11:54:42.840543    6632 start.go:360] acquireMachinesLock for calico-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:42.840776    6632 start.go:364] duration metric: took 188.292µs to acquireMachinesLock for "calico-271000"
	I0915 11:54:42.840809    6632 start.go:93] Provisioning new machine with config: &{Name:calico-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:42.840937    6632 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:42.850061    6632 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:42.883884    6632 start.go:159] libmachine.API.Create for "calico-271000" (driver="qemu2")
	I0915 11:54:42.883927    6632 client.go:168] LocalClient.Create starting
	I0915 11:54:42.884043    6632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:42.884105    6632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:42.884123    6632 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:42.884191    6632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:42.884232    6632 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:42.884243    6632 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:42.884672    6632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:43.050039    6632 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:43.245263    6632 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:43.245273    6632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:43.245498    6632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:43.255247    6632 main.go:141] libmachine: STDOUT: 
	I0915 11:54:43.255269    6632 main.go:141] libmachine: STDERR: 
	I0915 11:54:43.255349    6632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2 +20000M
	I0915 11:54:43.263292    6632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:43.263309    6632 main.go:141] libmachine: STDERR: 
	I0915 11:54:43.263321    6632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:43.263326    6632 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:43.263336    6632 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:43.263374    6632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b3:87:c1:d8:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/calico-271000/disk.qcow2
	I0915 11:54:43.265051    6632 main.go:141] libmachine: STDOUT: 
	I0915 11:54:43.265067    6632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:43.265083    6632 client.go:171] duration metric: took 381.153958ms to LocalClient.Create
	I0915 11:54:45.267265    6632 start.go:128] duration metric: took 2.426319667s to createHost
	I0915 11:54:45.267330    6632 start.go:83] releasing machines lock for "calico-271000", held for 2.426556833s
	W0915 11:54:45.267686    6632 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:45.277360    6632 out.go:201] 
	W0915 11:54:45.285576    6632 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:54:45.285664    6632 out.go:270] * 
	* 
	W0915 11:54:45.288534    6632 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:54:45.298431    6632 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
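
Note: each failed run retries the VM creation once after 5 seconds, then gives up and leaves its profile behind; the logs themselves suggest "minikube delete -p <profile>" as the cleanup. Once the socket_vmnet daemon is reachable again, the profiles from this group can be cleared before a re-run, for example:

	out/minikube-darwin-arm64 delete -p kubenet-271000
	out/minikube-darwin-arm64 delete -p custom-flannel-271000
	out/minikube-darwin-arm64 delete -p calico-271000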

TestNetworkPlugins/group/false/Start (9.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-271000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.853151875s)

-- stdout --
	* [false-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-271000" primary control-plane node in "false-271000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-271000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:54:47.712920    6765 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:54:47.713087    6765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:47.713090    6765 out.go:358] Setting ErrFile to fd 2...
	I0915 11:54:47.713093    6765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:47.713220    6765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:54:47.714265    6765 out.go:352] Setting JSON to false
	I0915 11:54:47.730619    6765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5050,"bootTime":1726421437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:54:47.730693    6765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:54:47.735789    6765 out.go:177] * [false-271000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:54:47.743757    6765 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:54:47.743855    6765 notify.go:220] Checking for updates...
	I0915 11:54:47.749706    6765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:54:47.752689    6765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:54:47.754267    6765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:54:47.757689    6765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:54:47.760676    6765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:54:47.764029    6765 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:54:47.764094    6765 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:54:47.764146    6765 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:54:47.767695    6765 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:54:47.774664    6765 start.go:297] selected driver: qemu2
	I0915 11:54:47.774670    6765 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:54:47.774676    6765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:54:47.777140    6765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:54:47.779713    6765 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:54:47.782783    6765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:54:47.782805    6765 cni.go:84] Creating CNI manager for "false"
	I0915 11:54:47.782835    6765 start.go:340] cluster config:
	{Name:false-271000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:54:47.786461    6765 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:54:47.793705    6765 out.go:177] * Starting "false-271000" primary control-plane node in "false-271000" cluster
	I0915 11:54:47.797733    6765 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:54:47.797751    6765 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:54:47.797765    6765 cache.go:56] Caching tarball of preloaded images
	I0915 11:54:47.797847    6765 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:54:47.797853    6765 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:54:47.797917    6765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/false-271000/config.json ...
	I0915 11:54:47.797935    6765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/false-271000/config.json: {Name:mke40555032b26cca67a2b727a6de05d3bdb6e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:54:47.798148    6765 start.go:360] acquireMachinesLock for false-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:47.798184    6765 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "false-271000"
	I0915 11:54:47.798196    6765 start.go:93] Provisioning new machine with config: &{Name:false-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:47.798223    6765 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:47.805681    6765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:47.821571    6765 start.go:159] libmachine.API.Create for "false-271000" (driver="qemu2")
	I0915 11:54:47.821607    6765 client.go:168] LocalClient.Create starting
	I0915 11:54:47.821673    6765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:47.821705    6765 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:47.821715    6765 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:47.821755    6765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:47.821777    6765 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:47.821784    6765 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:47.822118    6765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:47.983108    6765 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:48.142995    6765 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:48.143004    6765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:48.143219    6765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:48.152948    6765 main.go:141] libmachine: STDOUT: 
	I0915 11:54:48.152967    6765 main.go:141] libmachine: STDERR: 
	I0915 11:54:48.153020    6765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2 +20000M
	I0915 11:54:48.161179    6765 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:48.161194    6765 main.go:141] libmachine: STDERR: 
	I0915 11:54:48.161216    6765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:48.161220    6765 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:48.161232    6765 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:48.161260    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:9a:0c:a5:b4:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:48.162931    6765 main.go:141] libmachine: STDOUT: 
	I0915 11:54:48.162944    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:48.162962    6765 client.go:171] duration metric: took 341.351916ms to LocalClient.Create
	I0915 11:54:50.165037    6765 start.go:128] duration metric: took 2.366823125s to createHost
	I0915 11:54:50.165063    6765 start.go:83] releasing machines lock for "false-271000", held for 2.366892459s
	W0915 11:54:50.165076    6765 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:50.169585    6765 out.go:177] * Deleting "false-271000" in qemu2 ...
	W0915 11:54:50.183680    6765 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:50.183693    6765 start.go:729] Will try again in 5 seconds ...
	I0915 11:54:55.185863    6765 start.go:360] acquireMachinesLock for false-271000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:55.186419    6765 start.go:364] duration metric: took 458.125µs to acquireMachinesLock for "false-271000"
	I0915 11:54:55.186619    6765 start.go:93] Provisioning new machine with config: &{Name:false-271000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-271000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:55.186856    6765 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:55.198563    6765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0915 11:54:55.248013    6765 start.go:159] libmachine.API.Create for "false-271000" (driver="qemu2")
	I0915 11:54:55.248064    6765 client.go:168] LocalClient.Create starting
	I0915 11:54:55.248215    6765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:55.248285    6765 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:55.248306    6765 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:55.248374    6765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:55.248431    6765 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:55.248445    6765 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:55.248985    6765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:54:55.415753    6765 main.go:141] libmachine: Creating SSH key...
	I0915 11:54:55.466552    6765 main.go:141] libmachine: Creating Disk image...
	I0915 11:54:55.466564    6765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:54:55.466745    6765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:55.476117    6765 main.go:141] libmachine: STDOUT: 
	I0915 11:54:55.476156    6765 main.go:141] libmachine: STDERR: 
	I0915 11:54:55.476206    6765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2 +20000M
	I0915 11:54:55.484225    6765 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:54:55.484238    6765 main.go:141] libmachine: STDERR: 
	I0915 11:54:55.484251    6765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:55.484256    6765 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:54:55.484265    6765 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:54:55.484295    6765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b1:64:87:97:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/false-271000/disk.qcow2
	I0915 11:54:55.485995    6765 main.go:141] libmachine: STDOUT: 
	I0915 11:54:55.486009    6765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:54:55.486021    6765 client.go:171] duration metric: took 237.953ms to LocalClient.Create
	I0915 11:54:57.488217    6765 start.go:128] duration metric: took 2.301313375s to createHost
	I0915 11:54:57.488314    6765 start.go:83] releasing machines lock for "false-271000", held for 2.301834208s
	W0915 11:54:57.488812    6765 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-271000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:54:57.503665    6765 out.go:201] 
	W0915 11:54:57.506654    6765 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:54:57.506683    6765 out.go:270] * 
	* 
	W0915 11:54:57.509070    6765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:54:57.522524    6765 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.86s)
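Every start failure in this run reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal Go sketch of that connectivity check (illustrative only, not part of the test suite; the socket path is taken from the cluster config above, everything else is an assumption):

	// probe_socket_vmnet.go: dial the socket_vmnet unix socket the way the
	// qemu2 driver's helper does, and report whether it is accepting
	// connections. Hypothetical diagnostic, not minikube code.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent the dial would fail with ECONNREFUSED, matching the
			// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
			// lines throughout this report: the daemon is not listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host this prints the success line; here it would exit non-zero with the same connection refusal, which points at the socket_vmnet daemon on the CI agent rather than at any individual test.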

TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.89313325s)

-- stdout --
	* [old-k8s-version-634000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:54:59.762237    6887 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:54:59.762350    6887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:59.762352    6887 out.go:358] Setting ErrFile to fd 2...
	I0915 11:54:59.762355    6887 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:54:59.762487    6887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:54:59.763693    6887 out.go:352] Setting JSON to false
	I0915 11:54:59.780293    6887 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5062,"bootTime":1726421437,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:54:59.780398    6887 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:54:59.786882    6887 out.go:177] * [old-k8s-version-634000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:54:59.791622    6887 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:54:59.791658    6887 notify.go:220] Checking for updates...
	I0915 11:54:59.799585    6887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:54:59.802561    6887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:54:59.805583    6887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:54:59.807067    6887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:54:59.810558    6887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:54:59.813969    6887 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:54:59.814036    6887 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:54:59.814080    6887 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:54:59.818529    6887 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:54:59.825596    6887 start.go:297] selected driver: qemu2
	I0915 11:54:59.825601    6887 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:54:59.825606    6887 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:54:59.827750    6887 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:54:59.830598    6887 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:54:59.833630    6887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:54:59.833646    6887 cni.go:84] Creating CNI manager for ""
	I0915 11:54:59.833671    6887 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 11:54:59.833697    6887 start.go:340] cluster config:
	{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:54:59.836974    6887 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:54:59.843567    6887 out.go:177] * Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	I0915 11:54:59.847580    6887 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 11:54:59.847592    6887 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 11:54:59.847597    6887 cache.go:56] Caching tarball of preloaded images
	I0915 11:54:59.847651    6887 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:54:59.847655    6887 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 11:54:59.847708    6887 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/old-k8s-version-634000/config.json ...
	I0915 11:54:59.847719    6887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/old-k8s-version-634000/config.json: {Name:mk75a06d7fcc059934cac95f9d598264b9febb9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:54:59.847918    6887 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:54:59.847953    6887 start.go:364] duration metric: took 25.583µs to acquireMachinesLock for "old-k8s-version-634000"
	I0915 11:54:59.847966    6887 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:54:59.848003    6887 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:54:59.855597    6887 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:54:59.870971    6887 start.go:159] libmachine.API.Create for "old-k8s-version-634000" (driver="qemu2")
	I0915 11:54:59.871024    6887 client.go:168] LocalClient.Create starting
	I0915 11:54:59.871105    6887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:54:59.871142    6887 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:59.871152    6887 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:59.871203    6887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:54:59.871226    6887 main.go:141] libmachine: Decoding PEM data...
	I0915 11:54:59.871233    6887 main.go:141] libmachine: Parsing certificate...
	I0915 11:54:59.871629    6887 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:00.030960    6887 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:00.092012    6887 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:00.092018    6887 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:00.092202    6887 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:00.101462    6887 main.go:141] libmachine: STDOUT: 
	I0915 11:55:00.101484    6887 main.go:141] libmachine: STDERR: 
	I0915 11:55:00.101545    6887 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2 +20000M
	I0915 11:55:00.109594    6887 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:00.109616    6887 main.go:141] libmachine: STDERR: 
	I0915 11:55:00.109630    6887 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:00.109638    6887 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:00.109650    6887 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:00.109682    6887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:1f:a9:dd:90:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:00.111307    6887 main.go:141] libmachine: STDOUT: 
	I0915 11:55:00.111321    6887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:00.111344    6887 client.go:171] duration metric: took 240.3165ms to LocalClient.Create
	I0915 11:55:02.113542    6887 start.go:128] duration metric: took 2.265520791s to createHost
	I0915 11:55:02.113670    6887 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 2.2657235s
	W0915 11:55:02.113735    6887 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:02.129211    6887 out.go:177] * Deleting "old-k8s-version-634000" in qemu2 ...
	W0915 11:55:02.164955    6887 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:02.164981    6887 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:07.167134    6887 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:07.167679    6887 start.go:364] duration metric: took 438.333µs to acquireMachinesLock for "old-k8s-version-634000"
	I0915 11:55:07.167774    6887 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:07.168059    6887 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:07.179769    6887 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:07.229676    6887 start.go:159] libmachine.API.Create for "old-k8s-version-634000" (driver="qemu2")
	I0915 11:55:07.229729    6887 client.go:168] LocalClient.Create starting
	I0915 11:55:07.229839    6887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:07.229912    6887 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:07.229930    6887 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:07.230001    6887 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:07.230048    6887 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:07.230059    6887 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:07.230782    6887 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:07.395594    6887 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:07.553138    6887 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:07.553149    6887 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:07.553371    6887 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:07.563415    6887 main.go:141] libmachine: STDOUT: 
	I0915 11:55:07.563436    6887 main.go:141] libmachine: STDERR: 
	I0915 11:55:07.563504    6887 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2 +20000M
	I0915 11:55:07.571871    6887 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:07.571888    6887 main.go:141] libmachine: STDERR: 
	I0915 11:55:07.571901    6887 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:07.571906    6887 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:07.571913    6887 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:07.571945    6887 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:3f:2d:0c:56:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:07.573780    6887 main.go:141] libmachine: STDOUT: 
	I0915 11:55:07.573795    6887 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:07.573810    6887 client.go:171] duration metric: took 344.078375ms to LocalClient.Create
	I0915 11:55:09.576104    6887 start.go:128] duration metric: took 2.408013666s to createHost
	I0915 11:55:09.576186    6887 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 2.408491625s
	W0915 11:55:09.576569    6887 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:09.595353    6887 out.go:201] 
	W0915 11:55:09.599373    6887 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:09.599400    6887 out.go:270] * 
	* 
	W0915 11:55:09.602080    6887 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:09.612316    6887 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (66.613375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.96s)
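The failed starts above all follow the same two-attempt shape visible in the logs: create the host, and on error delete the half-created profile, wait five seconds (start.go:729), retry once, then give up with exit status 80 and the GUEST_PROVISION reason. A rough Go sketch of that control flow, with hypothetical stand-in names (createHost here is not minikube's actual function):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the provisioning step; in this run it always
	// fails with the socket_vmnet connection refusal seen above.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			// the real code also deletes the partially created profile here
			time.Sleep(5 * time.Second)
			err = createHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80)
		}
	}

Because the daemon never comes up, the retry is futile, which is why each failing start test costs roughly the ~10 seconds (two attempts plus the 5-second backoff) recorded in its duration.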

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml: exit status 1 (29.482958ms)

** stderr ** 
	error: context "old-k8s-version-634000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-634000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (30.203375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (30.119959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-634000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system: exit status 1 (26.852875ms)

** stderr ** 
	error: context "old-k8s-version-634000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-634000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (31.085792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.181298167s)

-- stdout --
	* [old-k8s-version-634000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:55:13.439540    6949 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:13.439685    6949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:13.439689    6949 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:13.439691    6949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:13.439818    6949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:13.440852    6949 out.go:352] Setting JSON to false
	I0915 11:55:13.457346    6949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5076,"bootTime":1726421437,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:13.457441    6949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:13.462041    6949 out.go:177] * [old-k8s-version-634000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:13.469123    6949 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:13.469151    6949 notify.go:220] Checking for updates...
	I0915 11:55:13.476115    6949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:13.479106    6949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:13.482145    6949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:13.485080    6949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:13.488090    6949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:13.491432    6949 config.go:182] Loaded profile config "old-k8s-version-634000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0915 11:55:13.495054    6949 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0915 11:55:13.498099    6949 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:13.501994    6949 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:55:13.509096    6949 start.go:297] selected driver: qemu2
	I0915 11:55:13.509102    6949 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:13.509156    6949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:13.511410    6949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:13.511478    6949 cni.go:84] Creating CNI manager for ""
	I0915 11:55:13.511503    6949 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 11:55:13.511528    6949 start.go:340] cluster config:
	{Name:old-k8s-version-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-634000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:13.515027    6949 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:13.523097    6949 out.go:177] * Starting "old-k8s-version-634000" primary control-plane node in "old-k8s-version-634000" cluster
	I0915 11:55:13.527099    6949 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 11:55:13.527114    6949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 11:55:13.527129    6949 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:13.527192    6949 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:13.527197    6949 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 11:55:13.527261    6949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/old-k8s-version-634000/config.json ...
	I0915 11:55:13.527779    6949 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:13.527813    6949 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "old-k8s-version-634000"
	I0915 11:55:13.527821    6949 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:13.527826    6949 fix.go:54] fixHost starting: 
	I0915 11:55:13.527941    6949 fix.go:112] recreateIfNeeded on old-k8s-version-634000: state=Stopped err=<nil>
	W0915 11:55:13.527951    6949 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:13.532106    6949 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	I0915 11:55:13.540066    6949 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:13.540105    6949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:3f:2d:0c:56:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:13.542038    6949 main.go:141] libmachine: STDOUT: 
	I0915 11:55:13.542059    6949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:13.542088    6949 fix.go:56] duration metric: took 14.262625ms for fixHost
	I0915 11:55:13.542092    6949 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 14.275625ms
	W0915 11:55:13.542097    6949 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:13.542142    6949 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:13.542146    6949 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:18.544252    6949 start.go:360] acquireMachinesLock for old-k8s-version-634000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:18.544475    6949 start.go:364] duration metric: took 162.125µs to acquireMachinesLock for "old-k8s-version-634000"
	I0915 11:55:18.544508    6949 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:18.544514    6949 fix.go:54] fixHost starting: 
	I0915 11:55:18.544731    6949 fix.go:112] recreateIfNeeded on old-k8s-version-634000: state=Stopped err=<nil>
	W0915 11:55:18.544739    6949 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:18.550044    6949 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-634000" ...
	I0915 11:55:18.556967    6949 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:18.557036    6949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:3f:2d:0c:56:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/old-k8s-version-634000/disk.qcow2
	I0915 11:55:18.559830    6949 main.go:141] libmachine: STDOUT: 
	I0915 11:55:18.559860    6949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:18.559884    6949 fix.go:56] duration metric: took 15.369917ms for fixHost
	I0915 11:55:18.559891    6949 start.go:83] releasing machines lock for "old-k8s-version-634000", held for 15.399375ms
	W0915 11:55:18.559951    6949 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-634000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:18.566902    6949 out.go:201] 
	W0915 11:55:18.570876    6949 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:18.570886    6949 out.go:270] * 
	* 
	W0915 11:55:18.571645    6949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:18.583931    6949 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-634000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (39.692209ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.22s)
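
Every failure in this group traces to the same root cause: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on the unix socket /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM never boots. A minimal triage sketch for the CI host follows; these checks are assumptions based on the paths in the qemu command above, and the Homebrew service name is not recorded in this log:

	ls -l /var/run/socket_vmnet               # does the unix socket exist?
	pgrep -fl socket_vmnet                    # is the socket_vmnet daemon running?
	sudo brew services restart socket_vmnet   # restart the daemon, if it was installed via Homebrew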

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-634000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (29.81875ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-634000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.04625ms)

** stderr ** 
	error: context "old-k8s-version-634000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-634000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (30.375125ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-634000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
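
The diff above is in want/got form: each "-" line is an expected v1.20.0 image missing from the got list, which is empty because "image list" ran against a VM that never started. Assuming the same --format=json output, the returned list could be inspected with jq (the repoTags field name is an assumption, not shown in this log):

	out/minikube-darwin-arm64 -p old-k8s-version-634000 image list --format=json | jq -r '.[].repoTags[]'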
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (31.13925ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1: exit status 83 (41.27975ms)

-- stdout --
	* The control-plane node old-k8s-version-634000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-634000"
-- /stdout --
** stderr ** 
	I0915 11:55:18.822028    6972 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:18.822863    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:18.822867    6972 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:18.822870    6972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:18.823005    6972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:18.823201    6972 out.go:352] Setting JSON to false
	I0915 11:55:18.823208    6972 mustload.go:65] Loading cluster: old-k8s-version-634000
	I0915 11:55:18.823435    6972 config.go:182] Loaded profile config "old-k8s-version-634000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0915 11:55:18.828231    6972 out.go:177] * The control-plane node old-k8s-version-634000 host is not running: state=Stopped
	I0915 11:55:18.831307    6972 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-634000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-634000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (29.8035ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (29.8805ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-634000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.946738833s)

-- stdout --
	* [no-preload-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-331000" primary control-plane node in "no-preload-331000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-331000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0915 11:55:19.141949    6989 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:19.142087    6989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:19.142090    6989 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:19.142092    6989 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:19.142223    6989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:19.143340    6989 out.go:352] Setting JSON to false
	I0915 11:55:19.160007    6989 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5082,"bootTime":1726421437,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:19.160083    6989 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:19.163951    6989 out.go:177] * [no-preload-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:19.171149    6989 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:19.171231    6989 notify.go:220] Checking for updates...
	I0915 11:55:19.178147    6989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:19.182141    6989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:19.185128    6989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:19.188110    6989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:19.191197    6989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:19.194430    6989 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:19.194498    6989 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:55:19.194543    6989 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:19.198144    6989 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:55:19.204005    6989 start.go:297] selected driver: qemu2
	I0915 11:55:19.204010    6989 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:55:19.204015    6989 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:19.206240    6989 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:55:19.209167    6989 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:55:19.212238    6989 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:19.212261    6989 cni.go:84] Creating CNI manager for ""
	I0915 11:55:19.212296    6989 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:19.212301    6989 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:55:19.212332    6989 start.go:340] cluster config:
	{Name:no-preload-331000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:19.215970    6989 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.223163    6989 out.go:177] * Starting "no-preload-331000" primary control-plane node in "no-preload-331000" cluster
	I0915 11:55:19.227138    6989 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:19.227230    6989 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/no-preload-331000/config.json ...
	I0915 11:55:19.227230    6989 cache.go:107] acquiring lock: {Name:mk568a49d65ae4b140550bf78b81ecfdc4d0bc00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227237    6989 cache.go:107] acquiring lock: {Name:mk245377517910de1d63326d274ed2f98f105eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227247    6989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/no-preload-331000/config.json: {Name:mk3b6d4e6d3a4b84b39c709244cfd4511a967973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:55:19.227254    6989 cache.go:107] acquiring lock: {Name:mk3844d62ecd0b055fdf7e95bb9e145ccbdf21ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227307    6989 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0915 11:55:19.227314    6989 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.708µs
	I0915 11:55:19.227321    6989 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0915 11:55:19.227329    6989 cache.go:107] acquiring lock: {Name:mk34b6c5cb155c1b3220735f644f82379ec1af23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227316    6989 cache.go:107] acquiring lock: {Name:mkde9d4240522cfce8da5b3a6a9ee295fb375ead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227365    6989 cache.go:107] acquiring lock: {Name:mk24f289f0b9e5ea34b0306e0f5e88c36815d0ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227458    6989 cache.go:107] acquiring lock: {Name:mk18bdab2394a6f8f960d835cb0a44c9f6149de9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227471    6989 cache.go:107] acquiring lock: {Name:mk40becf3d023b3f39047f9767b6c025a3b26510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:19.227550    6989 start.go:360] acquireMachinesLock for no-preload-331000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:19.227599    6989 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 11:55:19.227599    6989 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0915 11:55:19.227604    6989 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0915 11:55:19.227646    6989 start.go:364] duration metric: took 89.958µs to acquireMachinesLock for "no-preload-331000"
	I0915 11:55:19.227700    6989 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0915 11:55:19.227701    6989 start.go:93] Provisioning new machine with config: &{Name:no-preload-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:19.227728    6989 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:19.227794    6989 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0915 11:55:19.227868    6989 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0915 11:55:19.227978    6989 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0915 11:55:19.236099    6989 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:19.240496    6989 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0915 11:55:19.240659    6989 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0915 11:55:19.240777    6989 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 11:55:19.243226    6989 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0915 11:55:19.243301    6989 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0915 11:55:19.243406    6989 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0915 11:55:19.243412    6989 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0915 11:55:19.253193    6989 start.go:159] libmachine.API.Create for "no-preload-331000" (driver="qemu2")
	I0915 11:55:19.253217    6989 client.go:168] LocalClient.Create starting
	I0915 11:55:19.253294    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:19.253323    6989 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:19.253330    6989 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:19.253371    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:19.253394    6989 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:19.253403    6989 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:19.253732    6989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:19.417213    6989 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:19.545722    6989 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:19.545754    6989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:19.545956    6989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:19.555357    6989 main.go:141] libmachine: STDOUT: 
	I0915 11:55:19.555374    6989 main.go:141] libmachine: STDERR: 
	I0915 11:55:19.555438    6989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2 +20000M
	I0915 11:55:19.563726    6989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:19.563741    6989 main.go:141] libmachine: STDERR: 
	I0915 11:55:19.563755    6989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:19.563760    6989 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:19.563772    6989 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:19.563795    6989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:cc:31:05:a3:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:19.565586    6989 main.go:141] libmachine: STDOUT: 
	I0915 11:55:19.565607    6989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:19.565629    6989 client.go:171] duration metric: took 312.410125ms to LocalClient.Create
	I0915 11:55:19.647633    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0915 11:55:19.671309    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0915 11:55:19.678984    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0915 11:55:19.687627    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0915 11:55:19.689923    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0915 11:55:19.719729    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0915 11:55:19.740778    6989 cache.go:162] opening:  /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0915 11:55:19.839493    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0915 11:55:19.839508    6989 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 612.224041ms
	I0915 11:55:19.839516    6989 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0915 11:55:21.565766    6989 start.go:128] duration metric: took 2.338040459s to createHost
	I0915 11:55:21.565798    6989 start.go:83] releasing machines lock for "no-preload-331000", held for 2.338164042s
	W0915 11:55:21.565821    6989 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:21.583035    6989 out.go:177] * Deleting "no-preload-331000" in qemu2 ...
	W0915 11:55:21.602935    6989 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:21.602943    6989 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:22.967302    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0915 11:55:22.967316    6989 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.74011925s
	I0915 11:55:22.967322    6989 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0915 11:55:23.083554    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0915 11:55:23.083575    6989 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 3.856263375s
	I0915 11:55:23.083584    6989 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0915 11:55:23.696754    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0915 11:55:23.696793    6989 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.46943725s
	I0915 11:55:23.696812    6989 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0915 11:55:23.899670    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0915 11:55:23.899696    6989 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.672497666s
	I0915 11:55:23.899753    6989 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0915 11:55:25.283608    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0915 11:55:25.283666    6989 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 6.056377041s
	I0915 11:55:25.283690    6989 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0915 11:55:26.603123    6989 start.go:360] acquireMachinesLock for no-preload-331000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:26.603597    6989 start.go:364] duration metric: took 403.708µs to acquireMachinesLock for "no-preload-331000"
	I0915 11:55:26.603718    6989 start.go:93] Provisioning new machine with config: &{Name:no-preload-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:no-preload-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:26.604010    6989 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:26.614654    6989 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:26.666081    6989 start.go:159] libmachine.API.Create for "no-preload-331000" (driver="qemu2")
	I0915 11:55:26.666156    6989 client.go:168] LocalClient.Create starting
	I0915 11:55:26.666294    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:26.666365    6989 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:26.666389    6989 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:26.666458    6989 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:26.666505    6989 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:26.666522    6989 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:26.667033    6989 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:26.834604    6989 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:26.993666    6989 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:26.993674    6989 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:26.993887    6989 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:27.003583    6989 main.go:141] libmachine: STDOUT: 
	I0915 11:55:27.003609    6989 main.go:141] libmachine: STDERR: 
	I0915 11:55:27.003691    6989 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2 +20000M
	I0915 11:55:27.011874    6989 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:27.011906    6989 main.go:141] libmachine: STDERR: 
	I0915 11:55:27.011925    6989 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:27.011930    6989 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:27.011940    6989 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:27.011982    6989 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0b:7d:e2:f3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:27.013985    6989 main.go:141] libmachine: STDOUT: 
	I0915 11:55:27.014011    6989 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:27.014025    6989 client.go:171] duration metric: took 347.862708ms to LocalClient.Create
	I0915 11:55:27.967202    6989 cache.go:157] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0915 11:55:27.967233    6989 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.739964084s
	I0915 11:55:27.967241    6989 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0915 11:55:27.967259    6989 cache.go:87] Successfully saved all images to host disk.
	I0915 11:55:29.016217    6989 start.go:128] duration metric: took 2.412189708s to createHost
	I0915 11:55:29.016309    6989 start.go:83] releasing machines lock for "no-preload-331000", held for 2.412706125s
	W0915 11:55:29.016601    6989 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-331000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:29.027471    6989 out.go:201] 
	W0915 11:55:29.035683    6989 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:29.035725    6989 out.go:270] * 
	* 
	W0915 11:55:29.037668    6989 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:29.046255    6989 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (59.967917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.01s)
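
Every failure in this "no-preload" group traces back to the same line in the log above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the profile never comes up. A minimal Go probe along the lines below (an illustrative sketch, not part of the test suite) reproduces the dial that fails whenever the socket_vmnet daemon is not running on the build host:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Attempt the same unix-socket connection that
		// /opt/socket_vmnet/bin/socket_vmnet_client makes before handing
		// a connected fd to qemu-system-aarch64 (-netdev socket,fd=3).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err) // e.g. connection refused
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}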

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-331000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-331000 create -f testdata/busybox.yaml: exit status 1 (29.568375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-331000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-331000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (29.942375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (29.19225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
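
The `error: context "no-preload-331000" does not exist` failures here and in the following subtests are pure fallout from FirstStart: the VM never booted, so minikube never wrote a kubeconfig context for the profile, and every kubectl invocation against that context must fail. A hedged precondition check of the following shape (hypothetical helper, not part of helpers_test.go) makes the dependency explicit:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// contextExists reports whether kubectl knows about the named context;
	// `kubectl config get-contexts NAME` exits non-zero when it is absent.
	func contextExists(name string) bool {
		return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
	}

	func main() {
		// Prints false while no-preload-331000 was never provisioned.
		fmt.Println(contextExists("no-preload-331000"))
	}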

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-331000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-331000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-331000 describe deploy/metrics-server -n kube-system: exit status 1 (27.349ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-331000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-331000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (30.875083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.184279833s)

                                                
                                                
-- stdout --
	* [no-preload-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-331000" primary control-plane node in "no-preload-331000" cluster
	* Restarting existing qemu2 VM for "no-preload-331000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-331000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:55:32.774805    7082 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:32.774936    7082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:32.774939    7082 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:32.774941    7082 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:32.775058    7082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:32.776142    7082 out.go:352] Setting JSON to false
	I0915 11:55:32.792597    7082 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5095,"bootTime":1726421437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:32.792669    7082 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:32.797652    7082 out.go:177] * [no-preload-331000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:32.804644    7082 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:32.804755    7082 notify.go:220] Checking for updates...
	I0915 11:55:32.812474    7082 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:32.815627    7082 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:32.818629    7082 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:32.821705    7082 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:32.824699    7082 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:32.827888    7082 config.go:182] Loaded profile config "no-preload-331000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:32.828203    7082 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:32.832592    7082 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:55:32.839653    7082 start.go:297] selected driver: qemu2
	I0915 11:55:32.839660    7082 start.go:901] validating driver "qemu2" against &{Name:no-preload-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:32.839710    7082 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:32.842137    7082 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:32.842168    7082 cni.go:84] Creating CNI manager for ""
	I0915 11:55:32.842187    7082 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:32.842224    7082 start.go:340] cluster config:
	{Name:no-preload-331000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-331000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:32.845618    7082 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.852597    7082 out.go:177] * Starting "no-preload-331000" primary control-plane node in "no-preload-331000" cluster
	I0915 11:55:32.856661    7082 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:32.856743    7082 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/no-preload-331000/config.json ...
	I0915 11:55:32.856767    7082 cache.go:107] acquiring lock: {Name:mk245377517910de1d63326d274ed2f98f105eae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856791    7082 cache.go:107] acquiring lock: {Name:mk3844d62ecd0b055fdf7e95bb9e145ccbdf21ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856831    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0915 11:55:32.856836    7082 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.792µs
	I0915 11:55:32.856840    7082 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0915 11:55:32.856846    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0915 11:55:32.856853    7082 cache.go:107] acquiring lock: {Name:mk24f289f0b9e5ea34b0306e0f5e88c36815d0ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856854    7082 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 89.25µs
	I0915 11:55:32.856861    7082 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0915 11:55:32.856846    7082 cache.go:107] acquiring lock: {Name:mkde9d4240522cfce8da5b3a6a9ee295fb375ead Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856911    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0915 11:55:32.856914    7082 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 68.75µs
	I0915 11:55:32.856917    7082 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0915 11:55:32.856894    7082 cache.go:107] acquiring lock: {Name:mk34b6c5cb155c1b3220735f644f82379ec1af23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856923    7082 cache.go:107] acquiring lock: {Name:mk18bdab2394a6f8f960d835cb0a44c9f6149de9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856918    7082 cache.go:107] acquiring lock: {Name:mk568a49d65ae4b140550bf78b81ecfdc4d0bc00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.856910    7082 cache.go:107] acquiring lock: {Name:mk40becf3d023b3f39047f9767b6c025a3b26510 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:32.857002    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0915 11:55:32.857010    7082 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 141.459µs
	I0915 11:55:32.857013    7082 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0915 11:55:32.857013    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0915 11:55:32.857038    7082 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 103.459µs
	I0915 11:55:32.857043    7082 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0915 11:55:32.857048    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0915 11:55:32.857054    7082 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 160.417µs
	I0915 11:55:32.857058    7082 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0915 11:55:32.857115    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0915 11:55:32.857118    7082 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 265.25µs
	I0915 11:55:32.857121    7082 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0915 11:55:32.857115    7082 cache.go:115] /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0915 11:55:32.857127    7082 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 223.458µs
	I0915 11:55:32.857129    7082 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0915 11:55:32.857131    7082 cache.go:87] Successfully saved all images to host disk.
	I0915 11:55:32.857164    7082 start.go:360] acquireMachinesLock for no-preload-331000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:32.857195    7082 start.go:364] duration metric: took 26.041µs to acquireMachinesLock for "no-preload-331000"
	I0915 11:55:32.857204    7082 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:32.857208    7082 fix.go:54] fixHost starting: 
	I0915 11:55:32.857318    7082 fix.go:112] recreateIfNeeded on no-preload-331000: state=Stopped err=<nil>
	W0915 11:55:32.857326    7082 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:32.865623    7082 out.go:177] * Restarting existing qemu2 VM for "no-preload-331000" ...
	I0915 11:55:32.869584    7082 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:32.869615    7082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0b:7d:e2:f3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:32.871362    7082 main.go:141] libmachine: STDOUT: 
	I0915 11:55:32.871383    7082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:32.871412    7082 fix.go:56] duration metric: took 14.204042ms for fixHost
	I0915 11:55:32.871417    7082 start.go:83] releasing machines lock for "no-preload-331000", held for 14.217459ms
	W0915 11:55:32.871422    7082 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:32.871465    7082 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:32.871470    7082 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:37.872300    7082 start.go:360] acquireMachinesLock for no-preload-331000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:37.872713    7082 start.go:364] duration metric: took 283.167µs to acquireMachinesLock for "no-preload-331000"
	I0915 11:55:37.872845    7082 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:37.872858    7082 fix.go:54] fixHost starting: 
	I0915 11:55:37.873421    7082 fix.go:112] recreateIfNeeded on no-preload-331000: state=Stopped err=<nil>
	W0915 11:55:37.873439    7082 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:37.880721    7082 out.go:177] * Restarting existing qemu2 VM for "no-preload-331000" ...
	I0915 11:55:37.884709    7082 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:37.884876    7082 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:0b:7d:e2:f3:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/no-preload-331000/disk.qcow2
	I0915 11:55:37.892027    7082 main.go:141] libmachine: STDOUT: 
	I0915 11:55:37.892071    7082 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:37.892156    7082 fix.go:56] duration metric: took 19.296333ms for fixHost
	I0915 11:55:37.892355    7082 start.go:83] releasing machines lock for "no-preload-331000", held for 19.609667ms
	W0915 11:55:37.892527    7082 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-331000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-331000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:37.900784    7082 out.go:201] 
	W0915 11:55:37.903862    7082 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:37.903876    7082 out.go:270] * 
	* 
	W0915 11:55:37.905173    7082 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:37.920760    7082 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-331000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (51.799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.24s)
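
SecondStart takes the fix-host path ("Skipping create...Using existing machine configuration") and, when the restart fails, retries exactly once after a fixed five-second pause, as the jump from 11:55:32 to 11:55:37 shows. The control flow is roughly the following (an illustrative sketch of the pattern visible in the log, not minikube's actual start.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry tries once, waits a fixed delay on failure, tries again,
	// and surfaces the last error, mirroring "Will try again in 5 seconds ...".
	func startWithRetry(start func() error, retries int, delay time.Duration) error {
		err := start()
		for i := 0; i < retries && err != nil; i++ {
			time.Sleep(delay)
			err = start()
		}
		return err
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}, 1, 5*time.Second)
		fmt.Println(err) // both attempts fail while the daemon is down
	}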

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-331000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (31.739167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-331000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.593166ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-331000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-331000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (30.547584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-331000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (29.767666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
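
The "(-want +got)" block above is a structured diff: every line prefixed with "-" is an image the test expected `image list` to return but did not get, which with a stopped VM is the entire expected set. Assuming the comparison is done with cmp.Diff from github.com/google/go-cmp (the output format matches that package's convention), the check reduces to:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.10"} // abbreviated expected set
		got := []string{}                              // nothing listed: no running VM
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}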

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-331000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-331000 --alsologtostderr -v=1: exit status 83 (41.981792ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-331000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-331000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:55:38.173692    7104 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:38.173859    7104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:38.173862    7104 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:38.173864    7104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:38.174000    7104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:38.174258    7104 out.go:352] Setting JSON to false
	I0915 11:55:38.174265    7104 mustload.go:65] Loading cluster: no-preload-331000
	I0915 11:55:38.174493    7104 config.go:182] Loaded profile config "no-preload-331000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:38.179053    7104 out.go:177] * The control-plane node no-preload-331000 host is not running: state=Stopped
	I0915 11:55:38.184022    7104 out.go:177]   To start a cluster, run: "minikube start -p no-preload-331000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-331000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (29.735041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (30.084334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-331000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.068505167s)

                                                
                                                
-- stdout --
	* [embed-certs-526000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-526000" primary control-plane node in "embed-certs-526000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-526000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 11:55:38.492186    7121 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:38.492336    7121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:38.492344    7121 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:38.492346    7121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:38.492471    7121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:38.493613    7121 out.go:352] Setting JSON to false
	I0915 11:55:38.510144    7121 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5101,"bootTime":1726421437,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:38.510214    7121 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:38.514456    7121 out.go:177] * [embed-certs-526000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:38.521364    7121 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:38.521420    7121 notify.go:220] Checking for updates...
	I0915 11:55:38.528352    7121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:38.531351    7121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:38.532817    7121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:38.536365    7121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:38.539356    7121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:38.542712    7121 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:38.542770    7121 config.go:182] Loaded profile config "stopped-upgrade-515000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0915 11:55:38.542821    7121 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:38.547312    7121 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:55:38.554361    7121 start.go:297] selected driver: qemu2
	I0915 11:55:38.554367    7121 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:55:38.554379    7121 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:38.556727    7121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:55:38.559419    7121 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:55:38.562451    7121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:38.562468    7121 cni.go:84] Creating CNI manager for ""
	I0915 11:55:38.562488    7121 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:38.562494    7121 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:55:38.562524    7121 start.go:340] cluster config:
	{Name:embed-certs-526000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-526000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:38.566388    7121 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:38.574369    7121 out.go:177] * Starting "embed-certs-526000" primary control-plane node in "embed-certs-526000" cluster
	I0915 11:55:38.578346    7121 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:38.578359    7121 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:55:38.578367    7121 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:38.578425    7121 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:38.578431    7121 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:55:38.578490    7121 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/embed-certs-526000/config.json ...
	I0915 11:55:38.578501    7121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/embed-certs-526000/config.json: {Name:mk1576dd604b01520a737b61612b4965921c920a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:55:38.578712    7121 start.go:360] acquireMachinesLock for embed-certs-526000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:38.578746    7121 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "embed-certs-526000"
	I0915 11:55:38.578758    7121 start.go:93] Provisioning new machine with config: &{Name:embed-certs-526000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-526000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:38.578784    7121 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:38.586370    7121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:38.603470    7121 start.go:159] libmachine.API.Create for "embed-certs-526000" (driver="qemu2")
	I0915 11:55:38.603501    7121 client.go:168] LocalClient.Create starting
	I0915 11:55:38.603566    7121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:38.603597    7121 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:38.603606    7121 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:38.603638    7121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:38.603661    7121 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:38.603670    7121 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:38.604028    7121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:38.763553    7121 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:39.114287    7121 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:39.114297    7121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:39.114491    7121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:39.134468    7121 main.go:141] libmachine: STDOUT: 
	I0915 11:55:39.134489    7121 main.go:141] libmachine: STDERR: 
	I0915 11:55:39.134555    7121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2 +20000M
	I0915 11:55:39.150694    7121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:39.150711    7121 main.go:141] libmachine: STDERR: 
	I0915 11:55:39.150730    7121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:39.150734    7121 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:39.150747    7121 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:39.150784    7121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:36:32:0e:b7:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:39.152525    7121 main.go:141] libmachine: STDOUT: 
	I0915 11:55:39.152539    7121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:39.152557    7121 client.go:171] duration metric: took 549.055708ms to LocalClient.Create
	I0915 11:55:41.154765    7121 start.go:128] duration metric: took 2.575974375s to createHost
	I0915 11:55:41.154835    7121 start.go:83] releasing machines lock for "embed-certs-526000", held for 2.576099042s
	W0915 11:55:41.154889    7121 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:41.174745    7121 out.go:177] * Deleting "embed-certs-526000" in qemu2 ...
	W0915 11:55:41.199068    7121 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:41.199092    7121 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:46.201318    7121 start.go:360] acquireMachinesLock for embed-certs-526000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:46.201773    7121 start.go:364] duration metric: took 359.959µs to acquireMachinesLock for "embed-certs-526000"
	I0915 11:55:46.201931    7121 start.go:93] Provisioning new machine with config: &{Name:embed-certs-526000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-526000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:46.202240    7121 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:46.208903    7121 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:46.259877    7121 start.go:159] libmachine.API.Create for "embed-certs-526000" (driver="qemu2")
	I0915 11:55:46.259963    7121 client.go:168] LocalClient.Create starting
	I0915 11:55:46.260073    7121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:46.260142    7121 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:46.260157    7121 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:46.260212    7121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:46.260256    7121 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:46.260270    7121 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:46.260909    7121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:46.431288    7121 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:46.461874    7121 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:46.461885    7121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:46.462046    7121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:46.471444    7121 main.go:141] libmachine: STDOUT: 
	I0915 11:55:46.471467    7121 main.go:141] libmachine: STDERR: 
	I0915 11:55:46.471526    7121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2 +20000M
	I0915 11:55:46.479330    7121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:46.479344    7121 main.go:141] libmachine: STDERR: 
	I0915 11:55:46.479360    7121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:46.479368    7121 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:46.479377    7121 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:46.479417    7121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:06:2a:8f:c8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:46.481105    7121 main.go:141] libmachine: STDOUT: 
	I0915 11:55:46.481119    7121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:46.481134    7121 client.go:171] duration metric: took 221.166791ms to LocalClient.Create
	I0915 11:55:48.481574    7121 start.go:128] duration metric: took 2.279318791s to createHost
	I0915 11:55:48.481638    7121 start.go:83] releasing machines lock for "embed-certs-526000", held for 2.279853792s
	W0915 11:55:48.482071    7121 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-526000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-526000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:48.496708    7121 out.go:201] 
	W0915 11:55:48.499851    7121 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:48.499876    7121 out.go:270] * 
	* 
	W0915 11:55:48.502522    7121 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:48.517638    7121 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (62.979042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.14s)
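
Note: every "FirstStart" failure in this group has the same root cause. The qemu2 driver attaches the guest NIC through socket_vmnet, and socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused' because no daemon is listening on that socket, so the VM never boots. A minimal triage sketch for the test host, assuming socket_vmnet was installed via Homebrew (the service name below is an assumption, not taken from this log):

	# confirm the Unix socket exists and something is listening on it
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null   # an immediate refusal reproduces the error above
	# restart the daemon (assumed Homebrew service name); it must run as root
	sudo brew services restart socket_vmnet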

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (12.015491917s)

-- stdout --
	* [default-k8s-diff-port-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-294000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:55:39.099083    7138 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:39.099215    7138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:39.099218    7138 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:39.099220    7138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:39.099359    7138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:39.100459    7138 out.go:352] Setting JSON to false
	I0915 11:55:39.117156    7138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5102,"bootTime":1726421437,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:39.117228    7138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:39.121379    7138 out.go:177] * [default-k8s-diff-port-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:39.129472    7138 notify.go:220] Checking for updates...
	I0915 11:55:39.133370    7138 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:39.140382    7138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:39.150328    7138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:39.153386    7138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:39.154631    7138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:39.157328    7138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:39.160755    7138 config.go:182] Loaded profile config "embed-certs-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:39.160812    7138 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:39.160858    7138 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:39.165234    7138 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:55:39.172340    7138 start.go:297] selected driver: qemu2
	I0915 11:55:39.172345    7138 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:55:39.172350    7138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:39.174402    7138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 11:55:39.177432    7138 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:55:39.180422    7138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:39.180451    7138 cni.go:84] Creating CNI manager for ""
	I0915 11:55:39.180479    7138 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:39.180486    7138 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:55:39.180515    7138 start.go:340] cluster config:
	{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:39.184060    7138 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:39.195368    7138 out.go:177] * Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	I0915 11:55:39.199328    7138 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:39.199343    7138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:55:39.199354    7138 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:39.199424    7138 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:39.199430    7138 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:55:39.199503    7138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/default-k8s-diff-port-294000/config.json ...
	I0915 11:55:39.199514    7138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/default-k8s-diff-port-294000/config.json: {Name:mkaa0d67cc5a239e9f7f57b885ee0a78d1c37c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:55:39.200024    7138 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:41.154986    7138 start.go:364] duration metric: took 1.954922458s to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0915 11:55:41.155103    7138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:41.155293    7138 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:41.164781    7138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:41.214734    7138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-294000" (driver="qemu2")
	I0915 11:55:41.214780    7138 client.go:168] LocalClient.Create starting
	I0915 11:55:41.214905    7138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:41.214972    7138 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:41.214988    7138 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:41.215066    7138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:41.215111    7138 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:41.215126    7138 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:41.215912    7138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:41.384580    7138 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:41.515004    7138 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:41.515011    7138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:41.515196    7138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:41.524747    7138 main.go:141] libmachine: STDOUT: 
	I0915 11:55:41.524761    7138 main.go:141] libmachine: STDERR: 
	I0915 11:55:41.524815    7138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2 +20000M
	I0915 11:55:41.532721    7138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:41.532739    7138 main.go:141] libmachine: STDERR: 
	I0915 11:55:41.532758    7138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:41.532764    7138 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:41.532777    7138 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:41.532802    7138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:ca:91:bb:ed:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:41.534509    7138 main.go:141] libmachine: STDOUT: 
	I0915 11:55:41.534524    7138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:41.534548    7138 client.go:171] duration metric: took 319.763958ms to LocalClient.Create
	I0915 11:55:43.536709    7138 start.go:128] duration metric: took 2.381403s to createHost
	I0915 11:55:43.536779    7138 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 2.381760167s
	W0915 11:55:43.536832    7138 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:43.560962    7138 out.go:177] * Deleting "default-k8s-diff-port-294000" in qemu2 ...
	W0915 11:55:43.598928    7138 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:43.598955    7138 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:48.601004    7138 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:48.601071    7138 start.go:364] duration metric: took 50.834µs to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0915 11:55:48.601087    7138 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:48.601140    7138 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:48.608005    7138 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:48.625134    7138 start.go:159] libmachine.API.Create for "default-k8s-diff-port-294000" (driver="qemu2")
	I0915 11:55:48.625177    7138 client.go:168] LocalClient.Create starting
	I0915 11:55:48.625252    7138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:48.625280    7138 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:48.625288    7138 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:48.625323    7138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:48.625339    7138 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:48.625348    7138 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:48.625640    7138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:48.835332    7138 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:49.021661    7138 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:49.021671    7138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:49.021861    7138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:49.031428    7138 main.go:141] libmachine: STDOUT: 
	I0915 11:55:49.031450    7138 main.go:141] libmachine: STDERR: 
	I0915 11:55:49.031515    7138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2 +20000M
	I0915 11:55:49.039345    7138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:49.039361    7138 main.go:141] libmachine: STDERR: 
	I0915 11:55:49.039377    7138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:49.039384    7138 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:49.039394    7138 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:49.039423    7138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d0:73:2f:23:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:49.041105    7138 main.go:141] libmachine: STDOUT: 
	I0915 11:55:49.041120    7138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:49.041137    7138 client.go:171] duration metric: took 415.9595ms to LocalClient.Create
	I0915 11:55:51.043525    7138 start.go:128] duration metric: took 2.442318125s to createHost
	I0915 11:55:51.043646    7138 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 2.442581958s
	W0915 11:55:51.044064    7138 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:51.049768    7138 out.go:201] 
	W0915 11:55:51.055826    7138 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:51.055901    7138 out.go:270] * 
	* 
	W0915 11:55:51.058324    7138 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:51.070829    7138 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (64.742792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.08s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-526000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-526000 create -f testdata/busybox.yaml: exit status 1 (29.712458ms)

** stderr ** 
	error: context "embed-certs-526000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-526000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (33.510875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (33.231083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
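
Note: this failure is downstream of the failed FirstStart above. minikube only writes a kubeconfig context once the VM is provisioned, so the "embed-certs-526000" context was never created, and every kubectl step in this group fails with the same "context does not exist" error. A quick confirmation (standard kubectl; the context name comes from this log):

	kubectl config get-contexts embed-certs-526000   # expected to report the context is not found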

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-526000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-526000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-526000 describe deploy/metrics-server -n kube-system: exit status 1 (29.178ms)

** stderr ** 
	error: context "embed-certs-526000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-526000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (33.522792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
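
Note: the image string the test expects is composed from the two addon flags: --registries=MetricsServer=fake.domain is prefixed onto --images=MetricsServer=registry.k8s.io/echoserver:1.4, yielding fake.domain/registry.k8s.io/echoserver:1.4. On a running cluster the deployed image could be read back directly (illustrative only; the context does not exist in this run):

	kubectl --context embed-certs-526000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'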

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml: exit status 1 (29.169167ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-294000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.442209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.195292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-294000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system: exit status 1 (26.731833ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-294000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.142833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.179357834s)

-- stdout --
	* [embed-certs-526000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-526000" primary control-plane node in "embed-certs-526000" cluster
	* Restarting existing qemu2 VM for "embed-certs-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-526000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:55:52.037652    7223 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:52.037773    7223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:52.037777    7223 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:52.037779    7223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:52.037901    7223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:52.039007    7223 out.go:352] Setting JSON to false
	I0915 11:55:52.055445    7223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5115,"bootTime":1726421437,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:52.055532    7223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:52.058804    7223 out.go:177] * [embed-certs-526000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:52.064946    7223 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:52.065025    7223 notify.go:220] Checking for updates...
	I0915 11:55:52.071862    7223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:52.078845    7223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:52.081886    7223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:52.084859    7223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:52.087771    7223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:52.091123    7223 config.go:182] Loaded profile config "embed-certs-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:52.091381    7223 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:52.095835    7223 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:55:52.102933    7223 start.go:297] selected driver: qemu2
	I0915 11:55:52.102938    7223 start.go:901] validating driver "qemu2" against &{Name:embed-certs-526000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-526000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:52.102992    7223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:52.105405    7223 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:52.105434    7223 cni.go:84] Creating CNI manager for ""
	I0915 11:55:52.105453    7223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:52.105484    7223 start.go:340] cluster config:
	{Name:embed-certs-526000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-526000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:52.109071    7223 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:52.115865    7223 out.go:177] * Starting "embed-certs-526000" primary control-plane node in "embed-certs-526000" cluster
	I0915 11:55:52.119894    7223 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:52.119910    7223 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:55:52.119922    7223 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:52.119998    7223 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:52.120004    7223 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:55:52.120067    7223 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/embed-certs-526000/config.json ...
	I0915 11:55:52.120584    7223 start.go:360] acquireMachinesLock for embed-certs-526000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:52.120620    7223 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "embed-certs-526000"
	I0915 11:55:52.120629    7223 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:52.120634    7223 fix.go:54] fixHost starting: 
	I0915 11:55:52.120753    7223 fix.go:112] recreateIfNeeded on embed-certs-526000: state=Stopped err=<nil>
	W0915 11:55:52.120762    7223 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:52.124858    7223 out.go:177] * Restarting existing qemu2 VM for "embed-certs-526000" ...
	I0915 11:55:52.128956    7223 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:52.129007    7223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:06:2a:8f:c8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:52.131040    7223 main.go:141] libmachine: STDOUT: 
	I0915 11:55:52.131059    7223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:52.131093    7223 fix.go:56] duration metric: took 10.457833ms for fixHost
	I0915 11:55:52.131097    7223 start.go:83] releasing machines lock for "embed-certs-526000", held for 10.472833ms
	W0915 11:55:52.131103    7223 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:52.131141    7223 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:52.131146    7223 start.go:729] Will try again in 5 seconds ...
	I0915 11:55:57.133333    7223 start.go:360] acquireMachinesLock for embed-certs-526000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:57.133712    7223 start.go:364] duration metric: took 304.666µs to acquireMachinesLock for "embed-certs-526000"
	I0915 11:55:57.133842    7223 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:57.133862    7223 fix.go:54] fixHost starting: 
	I0915 11:55:57.134601    7223 fix.go:112] recreateIfNeeded on embed-certs-526000: state=Stopped err=<nil>
	W0915 11:55:57.134628    7223 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:57.140113    7223 out.go:177] * Restarting existing qemu2 VM for "embed-certs-526000" ...
	I0915 11:55:57.144087    7223 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:57.144251    7223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:06:2a:8f:c8:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/embed-certs-526000/disk.qcow2
	I0915 11:55:57.153253    7223 main.go:141] libmachine: STDOUT: 
	I0915 11:55:57.153306    7223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:57.153382    7223 fix.go:56] duration metric: took 19.523666ms for fixHost
	I0915 11:55:57.153395    7223 start.go:83] releasing machines lock for "embed-certs-526000", held for 19.663666ms
	W0915 11:55:57.153577    7223 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-526000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-526000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:57.162107    7223 out.go:201] 
	W0915 11:55:57.166179    7223 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:57.166212    7223 out.go:270] * 
	* 
	W0915 11:55:57.168734    7223 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:55:57.176078    7223 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-526000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (68.350792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
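
Every failed start in this group shares the same proximate cause, visible in the stderr capture above: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which immediately reports Failed to connect to "/var/run/socket_vmnet": Connection refused. Nothing was listening on the socket_vmnet unix socket on the CI host, so the VM's network could never be attached. A minimal diagnostic sketch in Go (a hypothetical helper, not part of minikube or its test suite; the socket path is taken from the logs) would probe the same socket:

	// probesock.go - hypothetical diagnostic, not part of the test suite.
	// Checks whether anything is accepting connections on the unix socket
	// the qemu2 driver depends on for VM networking.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// This is the condition every run in this report is hitting.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refusal here points at the socket_vmnet service on the host rather than at minikube itself.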

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.38887525s)

-- stdout --
	* [default-k8s-diff-port-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:55:54.908666    7249 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:54.908803    7249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:54.908807    7249 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:54.908809    7249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:54.908919    7249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:54.909934    7249 out.go:352] Setting JSON to false
	I0915 11:55:54.926166    7249 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5117,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:54.926232    7249 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:54.931593    7249 out.go:177] * [default-k8s-diff-port-294000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:54.934759    7249 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:54.934821    7249 notify.go:220] Checking for updates...
	I0915 11:55:54.942564    7249 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:54.945576    7249 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:54.948572    7249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:54.951624    7249 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:54.954567    7249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:54.957803    7249 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:54.958061    7249 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:54.962529    7249 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:55:54.968555    7249 start.go:297] selected driver: qemu2
	I0915 11:55:54.968562    7249 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:54.968625    7249 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:54.971061    7249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 11:55:54.971096    7249 cni.go:84] Creating CNI manager for ""
	I0915 11:55:54.971116    7249 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:54.971147    7249 start.go:340] cluster config:
	{Name:default-k8s-diff-port-294000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-294000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:54.974812    7249 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:54.982589    7249 out.go:177] * Starting "default-k8s-diff-port-294000" primary control-plane node in "default-k8s-diff-port-294000" cluster
	I0915 11:55:54.987604    7249 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:54.987625    7249 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:55:54.987636    7249 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:54.987696    7249 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:54.987702    7249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:55:54.987845    7249 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/default-k8s-diff-port-294000/config.json ...
	I0915 11:55:54.988400    7249 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:54.988431    7249 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0915 11:55:54.988440    7249 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:55:54.988445    7249 fix.go:54] fixHost starting: 
	I0915 11:55:54.988571    7249 fix.go:112] recreateIfNeeded on default-k8s-diff-port-294000: state=Stopped err=<nil>
	W0915 11:55:54.988579    7249 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:55:54.992437    7249 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	I0915 11:55:55.000556    7249 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:55.000593    7249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d0:73:2f:23:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:55:55.002669    7249 main.go:141] libmachine: STDOUT: 
	I0915 11:55:55.002691    7249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:55.002719    7249 fix.go:56] duration metric: took 14.273042ms for fixHost
	I0915 11:55:55.002724    7249 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 14.288542ms
	W0915 11:55:55.002729    7249 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:55:55.002765    7249 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:55:55.002770    7249 start.go:729] Will try again in 5 seconds ...
	I0915 11:56:00.004933    7249 start.go:360] acquireMachinesLock for default-k8s-diff-port-294000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:56:00.195774    7249 start.go:364] duration metric: took 190.709459ms to acquireMachinesLock for "default-k8s-diff-port-294000"
	I0915 11:56:00.195866    7249 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:56:00.195886    7249 fix.go:54] fixHost starting: 
	I0915 11:56:00.196646    7249 fix.go:112] recreateIfNeeded on default-k8s-diff-port-294000: state=Stopped err=<nil>
	W0915 11:56:00.196676    7249 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:56:00.211040    7249 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-294000" ...
	I0915 11:56:00.222033    7249 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:56:00.222218    7249 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:d0:73:2f:23:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/default-k8s-diff-port-294000/disk.qcow2
	I0915 11:56:00.232242    7249 main.go:141] libmachine: STDOUT: 
	I0915 11:56:00.232328    7249 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:56:00.232418    7249 fix.go:56] duration metric: took 36.529208ms for fixHost
	I0915 11:56:00.232442    7249 start.go:83] releasing machines lock for "default-k8s-diff-port-294000", held for 36.627333ms
	W0915 11:56:00.232706    7249 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-294000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:00.241065    7249 out.go:201] 
	W0915 11:56:00.244173    7249 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:56:00.244232    7249 out.go:270] * 
	* 
	W0915 11:56:00.246480    7249 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:56:00.256085    7249 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-294000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (59.104042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.45s)
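
The retry behavior in the log above is worth noting: after the first "StartHost failed, but will try again", minikube sleeps a fixed 5 seconds, makes exactly one more attempt, and then exits with status 80 (GUEST_PROVISION). A rough Go sketch of that shape, where startHost is a hypothetical stand-in for the real fixHost/driver-start path, not minikube's actual code:

	// Rough sketch, assuming one fixed-delay retry as the timestamps above
	// suggest; startHost is a hypothetical stand-in.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startWithRetry(startHost func() error) error {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				return fmt.Errorf("error provisioning guest: %w", err)
			}
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println(err) // both attempts refused, matching the exit above
	}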

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-526000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (32.099542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-526000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-526000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-526000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.713375ms)

** stderr ** 
	error: context "embed-certs-526000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-526000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (29.726834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
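
The instant failures in UserAppExistsAfterStop and AddonExistsAfterStop are downstream of SecondStart: because the VM never came back, the kubeconfig never regained an "embed-certs-526000" context, and every kubectl call bails out before touching the network. A sketch of that check using k8s.io/client-go (illustrative only, not the test suite's code; the kubeconfig path is the KUBECONFIG value from the logs):

	// Hypothetical illustration using k8s.io/client-go/tools/clientcmd.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/19648-1650/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts["embed-certs-526000"]; !ok {
			// The same condition kubectl reports in the stderr capture above.
			fmt.Println(`context "embed-certs-526000" does not exist`)
		}
	}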

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-526000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (29.618833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
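
The "(-want +got)" block above has the shape of a go-cmp diff: every expected v1.31.1 image sits on the "-want" side because "image list" returned nothing from a VM that never started. A sketch that reproduces the shape (abridged image list; assumes github.com/google/go-cmp and is not the test's actual code):

	// Illustrative only; the expected image list above is abridged here.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.31.1",
			// ...the remaining expected v1.31.1 images listed above...
		}
		var got []string // nothing to list while the VM is stopped
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}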

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-526000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-526000 --alsologtostderr -v=1: exit status 83 (39.9325ms)

-- stdout --
	* The control-plane node embed-certs-526000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-526000"

-- /stdout --
** stderr ** 
	I0915 11:55:57.447081    7268 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:57.447249    7268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:57.447252    7268 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:57.447254    7268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:57.447377    7268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:57.447582    7268 out.go:352] Setting JSON to false
	I0915 11:55:57.447590    7268 mustload.go:65] Loading cluster: embed-certs-526000
	I0915 11:55:57.447827    7268 config.go:182] Loaded profile config "embed-certs-526000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:57.452069    7268 out.go:177] * The control-plane node embed-certs-526000 host is not running: state=Stopped
	I0915 11:55:57.456079    7268 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-526000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-526000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (29.577875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (29.590583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-526000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
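
One detail that explains why every qemu invocation in this report fails before the VM even boots: the command lines all carry "-netdev socket,id=net0,fd=3", meaning qemu expects its network backend as an already-open file descriptor. socket_vmnet_client's job is to dial /var/run/socket_vmnet and hand the connected socket to qemu as fd 3; when the dial is refused, qemu never starts. A simplified Go sketch of that fd-passing mechanism (a stand-in for illustration, not socket_vmnet_client's real source):

	// Simplified stand-in shown only to illustrate the fd=3 hand-off;
	// paths and flags are taken from the qemu command lines above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, err) // every run in this report stops here
			os.Exit(1)
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cmd := exec.Command("qemu-system-aarch64" /* plus the flags shown above */)
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}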

TestStartStop/group/newest-cni/serial/FirstStart (10.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.077024334s)

-- stdout --
	* [newest-cni-221000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-221000" primary control-plane node in "newest-cni-221000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-221000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:55:57.763426    7285 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:55:57.763549    7285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:57.763552    7285 out.go:358] Setting ErrFile to fd 2...
	I0915 11:55:57.763554    7285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:55:57.763698    7285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:55:57.764792    7285 out.go:352] Setting JSON to false
	I0915 11:55:57.781016    7285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5120,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:55:57.781084    7285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:55:57.786110    7285 out.go:177] * [newest-cni-221000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:55:57.793127    7285 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:55:57.793173    7285 notify.go:220] Checking for updates...
	I0915 11:55:57.800054    7285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:55:57.803070    7285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:55:57.811101    7285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:55:57.814101    7285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:55:57.817071    7285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:55:57.820399    7285 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:57.820479    7285 config.go:182] Loaded profile config "multinode-715000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:55:57.820525    7285 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:55:57.825059    7285 out.go:177] * Using the qemu2 driver based on user configuration
	I0915 11:55:57.832057    7285 start.go:297] selected driver: qemu2
	I0915 11:55:57.832063    7285 start.go:901] validating driver "qemu2" against <nil>
	I0915 11:55:57.832069    7285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:55:57.834519    7285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0915 11:55:57.834564    7285 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0915 11:55:57.842061    7285 out.go:177] * Automatically selected the socket_vmnet network
	I0915 11:55:57.845148    7285 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0915 11:55:57.845169    7285 cni.go:84] Creating CNI manager for ""
	I0915 11:55:57.845211    7285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:55:57.845220    7285 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 11:55:57.845244    7285 start.go:340] cluster config:
	{Name:newest-cni-221000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:55:57.849218    7285 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:55:57.856051    7285 out.go:177] * Starting "newest-cni-221000" primary control-plane node in "newest-cni-221000" cluster
	I0915 11:55:57.860040    7285 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:55:57.860057    7285 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:55:57.860065    7285 cache.go:56] Caching tarball of preloaded images
	I0915 11:55:57.860137    7285 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:55:57.860145    7285 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:55:57.860207    7285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/newest-cni-221000/config.json ...
	I0915 11:55:57.860218    7285 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/newest-cni-221000/config.json: {Name:mk2328837061f312089b48f74568f58ccc78e0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 11:55:57.860650    7285 start.go:360] acquireMachinesLock for newest-cni-221000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:55:57.860688    7285 start.go:364] duration metric: took 31.417µs to acquireMachinesLock for "newest-cni-221000"
	I0915 11:55:57.860700    7285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:55:57.860745    7285 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:55:57.869044    7285 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:55:57.888018    7285 start.go:159] libmachine.API.Create for "newest-cni-221000" (driver="qemu2")
	I0915 11:55:57.888046    7285 client.go:168] LocalClient.Create starting
	I0915 11:55:57.888123    7285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:55:57.888159    7285 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:57.888169    7285 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:57.888211    7285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:55:57.888238    7285 main.go:141] libmachine: Decoding PEM data...
	I0915 11:55:57.888247    7285 main.go:141] libmachine: Parsing certificate...
	I0915 11:55:57.888800    7285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:55:58.046755    7285 main.go:141] libmachine: Creating SSH key...
	I0915 11:55:58.173859    7285 main.go:141] libmachine: Creating Disk image...
	I0915 11:55:58.173865    7285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:55:58.174022    7285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:55:58.183565    7285 main.go:141] libmachine: STDOUT: 
	I0915 11:55:58.183579    7285 main.go:141] libmachine: STDERR: 
	I0915 11:55:58.183644    7285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2 +20000M
	I0915 11:55:58.191491    7285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:55:58.191507    7285 main.go:141] libmachine: STDERR: 
	I0915 11:55:58.191527    7285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:55:58.191532    7285 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:55:58.191545    7285 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:55:58.191573    7285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a7:3b:db:d6:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:55:58.193280    7285 main.go:141] libmachine: STDOUT: 
	I0915 11:55:58.193294    7285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:55:58.193323    7285 client.go:171] duration metric: took 305.27425ms to LocalClient.Create
	I0915 11:56:00.195502    7285 start.go:128] duration metric: took 2.334747666s to createHost
	I0915 11:56:00.195624    7285 start.go:83] releasing machines lock for "newest-cni-221000", held for 2.334908042s
	W0915 11:56:00.195682    7285 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:00.219116    7285 out.go:177] * Deleting "newest-cni-221000" in qemu2 ...
	W0915 11:56:00.273699    7285 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:00.273726    7285 start.go:729] Will try again in 5 seconds ...
	I0915 11:56:05.275904    7285 start.go:360] acquireMachinesLock for newest-cni-221000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:56:05.276326    7285 start.go:364] duration metric: took 344.75µs to acquireMachinesLock for "newest-cni-221000"
	I0915 11:56:05.276509    7285 start.go:93] Provisioning new machine with config: &{Name:newest-cni-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 11:56:05.276784    7285 start.go:125] createHost starting for "" (driver="qemu2")
	I0915 11:56:05.281629    7285 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0915 11:56:05.331106    7285 start.go:159] libmachine.API.Create for "newest-cni-221000" (driver="qemu2")
	I0915 11:56:05.331166    7285 client.go:168] LocalClient.Create starting
	I0915 11:56:05.331302    7285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/ca.pem
	I0915 11:56:05.331369    7285 main.go:141] libmachine: Decoding PEM data...
	I0915 11:56:05.331388    7285 main.go:141] libmachine: Parsing certificate...
	I0915 11:56:05.331444    7285 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19648-1650/.minikube/certs/cert.pem
	I0915 11:56:05.331490    7285 main.go:141] libmachine: Decoding PEM data...
	I0915 11:56:05.331502    7285 main.go:141] libmachine: Parsing certificate...
	I0915 11:56:05.332514    7285 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso...
	I0915 11:56:05.502568    7285 main.go:141] libmachine: Creating SSH key...
	I0915 11:56:05.739156    7285 main.go:141] libmachine: Creating Disk image...
	I0915 11:56:05.739165    7285 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0915 11:56:05.739418    7285 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2.raw /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:56:05.749290    7285 main.go:141] libmachine: STDOUT: 
	I0915 11:56:05.749314    7285 main.go:141] libmachine: STDERR: 
	I0915 11:56:05.749381    7285 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2 +20000M
	I0915 11:56:05.757318    7285 main.go:141] libmachine: STDOUT: Image resized.
	
	I0915 11:56:05.757333    7285 main.go:141] libmachine: STDERR: 
	I0915 11:56:05.757343    7285 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:56:05.757350    7285 main.go:141] libmachine: Starting QEMU VM...
	I0915 11:56:05.757363    7285 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:56:05.757390    7285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:ec:97:f7:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:56:05.758953    7285 main.go:141] libmachine: STDOUT: 
	I0915 11:56:05.758967    7285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:56:05.758980    7285 client.go:171] duration metric: took 427.811542ms to LocalClient.Create
	I0915 11:56:07.761068    7285 start.go:128] duration metric: took 2.484266208s to createHost
	I0915 11:56:07.761128    7285 start.go:83] releasing machines lock for "newest-cni-221000", held for 2.484796791s
	W0915 11:56:07.761442    7285 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-221000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:07.777051    7285 out.go:201] 
	W0915 11:56:07.781278    7285 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:56:07.781304    7285 out.go:270] * 
	* 
	W0915 11:56:07.783868    7285 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:56:07.799244    7285 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (69.716917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.15s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-294000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (31.119833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-294000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.756166ms)

** stderr ** 
	error: context "default-k8s-diff-port-294000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-294000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (29.295625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-294000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (28.885375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1: exit status 83 (49.678792ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-294000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-294000"

-- /stdout --
** stderr ** 
	I0915 11:56:00.515335    7311 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:56:00.515462    7311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:00.515466    7311 out.go:358] Setting ErrFile to fd 2...
	I0915 11:56:00.515468    7311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:00.515596    7311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:56:00.515831    7311 out.go:352] Setting JSON to false
	I0915 11:56:00.515838    7311 mustload.go:65] Loading cluster: default-k8s-diff-port-294000
	I0915 11:56:00.516056    7311 config.go:182] Loaded profile config "default-k8s-diff-port-294000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:56:00.520821    7311 out.go:177] * The control-plane node default-k8s-diff-port-294000 host is not running: state=Stopped
	I0915 11:56:00.533103    7311 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-294000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-294000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (30.092333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (28.782208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-294000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.187126166s)

-- stdout --
	* [newest-cni-221000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-221000" primary control-plane node in "newest-cni-221000" cluster
	* Restarting existing qemu2 VM for "newest-cni-221000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-221000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0915 11:56:11.362773    7367 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:56:11.362904    7367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:11.362907    7367 out.go:358] Setting ErrFile to fd 2...
	I0915 11:56:11.362909    7367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:11.363043    7367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:56:11.364047    7367 out.go:352] Setting JSON to false
	I0915 11:56:11.380215    7367 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5134,"bootTime":1726421437,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:56:11.380285    7367 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:56:11.385420    7367 out.go:177] * [newest-cni-221000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:56:11.393484    7367 notify.go:220] Checking for updates...
	I0915 11:56:11.397448    7367 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:56:11.400345    7367 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:56:11.403429    7367 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:56:11.406439    7367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:56:11.407556    7367 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:56:11.410427    7367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:56:11.413847    7367 config.go:182] Loaded profile config "newest-cni-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:56:11.414136    7367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:56:11.418288    7367 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:56:11.425436    7367 start.go:297] selected driver: qemu2
	I0915 11:56:11.425442    7367 start.go:901] validating driver "qemu2" against &{Name:newest-cni-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:56:11.425487    7367 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:56:11.427846    7367 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0915 11:56:11.427870    7367 cni.go:84] Creating CNI manager for ""
	I0915 11:56:11.427897    7367 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 11:56:11.427929    7367 start.go:340] cluster config:
	{Name:newest-cni-221000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-221000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:56:11.431526    7367 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 11:56:11.439434    7367 out.go:177] * Starting "newest-cni-221000" primary control-plane node in "newest-cni-221000" cluster
	I0915 11:56:11.443459    7367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 11:56:11.443473    7367 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 11:56:11.443478    7367 cache.go:56] Caching tarball of preloaded images
	I0915 11:56:11.443544    7367 preload.go:172] Found /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 11:56:11.443549    7367 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 11:56:11.443604    7367 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/newest-cni-221000/config.json ...
	I0915 11:56:11.444111    7367 start.go:360] acquireMachinesLock for newest-cni-221000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:56:11.444141    7367 start.go:364] duration metric: took 24.042µs to acquireMachinesLock for "newest-cni-221000"
	I0915 11:56:11.444150    7367 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:56:11.444154    7367 fix.go:54] fixHost starting: 
	I0915 11:56:11.444281    7367 fix.go:112] recreateIfNeeded on newest-cni-221000: state=Stopped err=<nil>
	W0915 11:56:11.444289    7367 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:56:11.448456    7367 out.go:177] * Restarting existing qemu2 VM for "newest-cni-221000" ...
	I0915 11:56:11.456418    7367 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:56:11.456453    7367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:ec:97:f7:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:56:11.458562    7367 main.go:141] libmachine: STDOUT: 
	I0915 11:56:11.458583    7367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:56:11.458617    7367 fix.go:56] duration metric: took 14.460125ms for fixHost
	I0915 11:56:11.458623    7367 start.go:83] releasing machines lock for "newest-cni-221000", held for 14.477083ms
	W0915 11:56:11.458629    7367 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:56:11.458661    7367 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:11.458666    7367 start.go:729] Will try again in 5 seconds ...
	I0915 11:56:16.460951    7367 start.go:360] acquireMachinesLock for newest-cni-221000: {Name:mk3d418f69bc9eba04615835bad027b1acfe5ccd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0915 11:56:16.461387    7367 start.go:364] duration metric: took 327.833µs to acquireMachinesLock for "newest-cni-221000"
	I0915 11:56:16.461531    7367 start.go:96] Skipping create...Using existing machine configuration
	I0915 11:56:16.461551    7367 fix.go:54] fixHost starting: 
	I0915 11:56:16.462312    7367 fix.go:112] recreateIfNeeded on newest-cni-221000: state=Stopped err=<nil>
	W0915 11:56:16.462338    7367 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 11:56:16.468269    7367 out.go:177] * Restarting existing qemu2 VM for "newest-cni-221000" ...
	I0915 11:56:16.473116    7367 qemu.go:418] Using hvf for hardware acceleration
	I0915 11:56:16.473401    7367 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:20:ec:97:f7:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/newest-cni-221000/disk.qcow2
	I0915 11:56:16.483479    7367 main.go:141] libmachine: STDOUT: 
	I0915 11:56:16.483545    7367 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0915 11:56:16.483646    7367 fix.go:56] duration metric: took 22.095ms for fixHost
	I0915 11:56:16.483671    7367 start.go:83] releasing machines lock for "newest-cni-221000", held for 22.258417ms
	W0915 11:56:16.483896    7367 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-221000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-221000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0915 11:56:16.492138    7367 out.go:201] 
	W0915 11:56:16.497225    7367 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0915 11:56:16.497256    7367 out.go:270] * 
	* 
	W0915 11:56:16.499219    7367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 11:56:16.510168    7367 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-221000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (55.64525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-221000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (33.07275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-221000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-221000 --alsologtostderr -v=1: exit status 83 (44.265166ms)

-- stdout --
	* The control-plane node newest-cni-221000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-221000"

-- /stdout --
** stderr ** 
	I0915 11:56:16.687923    7387 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:56:16.688061    7387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:16.688064    7387 out.go:358] Setting ErrFile to fd 2...
	I0915 11:56:16.688067    7387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:56:16.688198    7387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:56:16.688442    7387 out.go:352] Setting JSON to false
	I0915 11:56:16.688451    7387 mustload.go:65] Loading cluster: newest-cni-221000
	I0915 11:56:16.688692    7387 config.go:182] Loaded profile config "newest-cni-221000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:56:16.692134    7387 out.go:177] * The control-plane node newest-cni-221000 host is not running: state=Stopped
	I0915 11:56:16.696063    7387 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-221000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-221000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (30.823ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-221000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (30.770042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-221000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 6.83
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 196.09
29 TestAddons/serial/Volcano 38.47
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 17.42
35 TestAddons/parallel/InspektorGadget 10.28
36 TestAddons/parallel/MetricsServer 5.29
39 TestAddons/parallel/CSI 58.66
40 TestAddons/parallel/Headlamp 17.65
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 53.97
43 TestAddons/parallel/NvidiaDevicePlugin 6.18
44 TestAddons/parallel/Yakd 10.26
45 TestAddons/StoppedEnableDisable 9.39
53 TestHyperKitDriverInstallOrUpdate 10.74
56 TestErrorSpam/setup 35.81
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.65
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 55.26
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 76.77
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.58
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.68
73 TestFunctional/serial/CacheCmd/cache/add_local 1.81
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.03
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 2.02
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
81 TestFunctional/serial/ExtraConfig 37.15
82 TestFunctional/serial/ComponentHealth 0.05
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.58
85 TestFunctional/serial/InvalidService 4.32
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 7.72
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.12
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.93
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.45
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.4
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
111 TestFunctional/parallel/License 0.25
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.95
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
121 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
124 TestFunctional/parallel/ServiceCmd/List 0.31
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
127 TestFunctional/parallel/ServiceCmd/Format 0.1
128 TestFunctional/parallel/ServiceCmd/URL 0.1
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
130 TestFunctional/parallel/ProfileCmd/profile_list 0.14
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
132 TestFunctional/parallel/MountCmd/any-port 5.28
133 TestFunctional/parallel/MountCmd/specific-port 0.94
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.09
135 TestFunctional/parallel/Version/short 0.04
136 TestFunctional/parallel/Version/components 0.24
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.9
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
141 TestFunctional/parallel/ImageCommands/ImageBuild 1.97
142 TestFunctional/parallel/ImageCommands/Setup 1.81
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.9
146 TestFunctional/parallel/DockerEnv/bash 0.28
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.85
161 TestMultiControlPlane/serial/DeployApp 4.33
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 56.84
164 TestMultiControlPlane/serial/NodeLabels 0.16
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.1
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 77.99
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.16
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 0.87
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.49
277 TestNoKubernetes/serial/Stop 2.15
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
294 TestStartStop/group/old-k8s-version/serial/Stop 3.4
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
305 TestStartStop/group/no-preload/serial/Stop 3.3
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
318 TestStartStop/group/embed-certs/serial/Stop 3.06
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.41
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.26
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-011000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-011000: exit status 85 (94.6085ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |          |
	|         | -p download-only-011000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 10:55:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 10:55:34.212043    2176 out.go:345] Setting OutFile to fd 1 ...
	I0915 10:55:34.212177    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:34.212181    2176 out.go:358] Setting ErrFile to fd 2...
	I0915 10:55:34.212183    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:34.212316    2176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	W0915 10:55:34.212404    2176 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19648-1650/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19648-1650/.minikube/config/config.json: no such file or directory
	I0915 10:55:34.213709    2176 out.go:352] Setting JSON to true
	I0915 10:55:34.231055    2176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1497,"bootTime":1726421437,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 10:55:34.231141    2176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 10:55:34.236630    2176 out.go:97] [download-only-011000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 10:55:34.236774    2176 notify.go:220] Checking for updates...
	W0915 10:55:34.236847    2176 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 10:55:34.240556    2176 out.go:169] MINIKUBE_LOCATION=19648
	I0915 10:55:34.249651    2176 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:55:34.252562    2176 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 10:55:34.256653    2176 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 10:55:34.259676    2176 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	W0915 10:55:34.265597    2176 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 10:55:34.265808    2176 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 10:55:34.270755    2176 out.go:97] Using the qemu2 driver based on user configuration
	I0915 10:55:34.270776    2176 start.go:297] selected driver: qemu2
	I0915 10:55:34.270792    2176 start.go:901] validating driver "qemu2" against <nil>
	I0915 10:55:34.270869    2176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 10:55:34.272480    2176 out.go:169] Automatically selected the socket_vmnet network
	I0915 10:55:34.278320    2176 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0915 10:55:34.278412    2176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 10:55:34.278466    2176 cni.go:84] Creating CNI manager for ""
	I0915 10:55:34.278510    2176 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 10:55:34.278556    2176 start.go:340] cluster config:
	{Name:download-only-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:55:34.283782    2176 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 10:55:34.287717    2176 out.go:97] Downloading VM boot image ...
	I0915 10:55:34.287733    2176 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/iso/arm64/minikube-v1.34.0-1726358414-19644-arm64.iso
	I0915 10:55:41.114368    2176 out.go:97] Starting "download-only-011000" primary control-plane node in "download-only-011000" cluster
	I0915 10:55:41.114407    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:41.174141    2176 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 10:55:41.174163    2176 cache.go:56] Caching tarball of preloaded images
	I0915 10:55:41.174327    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:41.178499    2176 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 10:55:41.178505    2176 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:41.256964    2176 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0915 10:55:47.706333    2176 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:47.706488    2176 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:48.402640    2176 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0915 10:55:48.402850    2176 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/download-only-011000/config.json ...
	I0915 10:55:48.402870    2176 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/download-only-011000/config.json: {Name:mk0f71c4e23cea7aa16097fd110f28e477dbb5fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 10:55:48.403103    2176 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 10:55:48.403298    2176 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0915 10:55:48.982838    2176 out.go:193] 
	W0915 10:55:48.989031    2176 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19648-1650/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0 0x1048a57a0] Decompressors:map[bz2:0x14000815ae0 gz:0x14000815ae8 tar:0x14000815a50 tar.bz2:0x14000815a60 tar.gz:0x14000815a70 tar.xz:0x14000815aa0 tar.zst:0x14000815ad0 tbz2:0x14000815a60 tgz:0x14000815a70 txz:0x14000815aa0 tzst:0x14000815ad0 xz:0x14000815b00 zip:0x14000815b10 zst:0x14000815b08] Getters:map[file:0x14000065f00 http:0x14000bd8370 https:0x14000bd83c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0915 10:55:48.989056    2176 out_reason.go:110] 
	W0915 10:55:48.999837    2176 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 10:55:49.003795    2176 out.go:193] 
	
	
	* The control-plane node download-only-011000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-011000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
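
Note: the exit status 85 and the kubectl download failure recorded above are expected here. The checksum fetch 404s because dl.k8s.io does not appear to host a darwin/arm64 kubectl build for v1.20.0. A quick manual check against the same URL the log shows (assumes only that curl is installed):

  # expect "HTTP/2 404" if the upstream artifact was never published
  curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n1

A 404 status line points to the missing upstream artifact rather than a minikube regression, consistent with the test still passing.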

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-011000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (6.83s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-082000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-082000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (6.82838775s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.83s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-082000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-082000: exit status 85 (80.621125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | -p download-only-011000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| delete  | -p download-only-011000        | download-only-011000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT | 15 Sep 24 10:55 PDT |
	| start   | -o=json --download-only        | download-only-082000 | jenkins | v1.34.0 | 15 Sep 24 10:55 PDT |                     |
	|         | -p download-only-082000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 10:55:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 10:55:49.419029    2208 out.go:345] Setting OutFile to fd 1 ...
	I0915 10:55:49.419161    2208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:49.419165    2208 out.go:358] Setting ErrFile to fd 2...
	I0915 10:55:49.419167    2208 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 10:55:49.419283    2208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 10:55:49.420357    2208 out.go:352] Setting JSON to true
	I0915 10:55:49.436503    2208 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1512,"bootTime":1726421437,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 10:55:49.436575    2208 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 10:55:49.441346    2208 out.go:97] [download-only-082000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 10:55:49.441449    2208 notify.go:220] Checking for updates...
	I0915 10:55:49.445232    2208 out.go:169] MINIKUBE_LOCATION=19648
	I0915 10:55:49.448381    2208 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 10:55:49.453413    2208 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 10:55:49.456404    2208 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 10:55:49.459321    2208 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	W0915 10:55:49.465212    2208 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 10:55:49.465376    2208 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 10:55:49.468303    2208 out.go:97] Using the qemu2 driver based on user configuration
	I0915 10:55:49.468313    2208 start.go:297] selected driver: qemu2
	I0915 10:55:49.468317    2208 start.go:901] validating driver "qemu2" against <nil>
	I0915 10:55:49.468367    2208 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 10:55:49.471336    2208 out.go:169] Automatically selected the socket_vmnet network
	I0915 10:55:49.474905    2208 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0915 10:55:49.475007    2208 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 10:55:49.475025    2208 cni.go:84] Creating CNI manager for ""
	I0915 10:55:49.475048    2208 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 10:55:49.475059    2208 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 10:55:49.475096    2208 start.go:340] cluster config:
	{Name:download-only-082000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-082000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 10:55:49.478485    2208 iso.go:125] acquiring lock: {Name:mk02a3cfbc014d2eb68fe361ac5bc6496711d31d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 10:55:49.481323    2208 out.go:97] Starting "download-only-082000" primary control-plane node in "download-only-082000" cluster
	I0915 10:55:49.481332    2208 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:55:49.538090    2208 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 10:55:49.538116    2208 cache.go:56] Caching tarball of preloaded images
	I0915 10:55:49.538286    2208 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 10:55:49.543465    2208 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0915 10:55:49.543473    2208 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:49.620126    2208 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0915 10:55:54.336294    2208 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0915 10:55:54.336681    2208 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19648-1650/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-082000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-082000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-082000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-620000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-620000: exit status 85 (63.065709ms)

-- stdout --
	* Profile "addons-620000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-620000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-620000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-620000: exit status 85 (59.321833ms)

-- stdout --
	* Profile "addons-620000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-620000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (196.09s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-620000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-620000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m16.087746167s)
--- PASS: TestAddons/Setup (196.09s)

TestAddons/serial/Volcano (38.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.345834ms
addons_test.go:897: volcano-scheduler stabilized in 7.375667ms
addons_test.go:913: volcano-controller stabilized in 7.389917ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-drr5l" [53b1f094-fb0a-425b-9d52-0d370e455268] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005188625s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-q6h7t" [a74b98b2-d1aa-4dd9-a313-00db8aa9b05e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006925417s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-m8q7d" [bd48dc69-f61c-4ac3-a937-c15ad7365e95] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004194292s
addons_test.go:932: (dbg) Run:  kubectl --context addons-620000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-620000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-620000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d4c277b3-715d-460c-8e00-084d453d8f35] Pending
helpers_test.go:344: "test-job-nginx-0" [d4c277b3-715d-460c-8e00-084d453d8f35] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d4c277b3-715d-460c-8e00-084d453d8f35] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.006065s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable volcano --alsologtostderr -v=1: (10.224450958s)
--- PASS: TestAddons/serial/Volcano (38.47s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-620000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-620000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (17.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-620000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-620000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-620000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3e366452-43ea-4b45-98ea-1e4346f84e70] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3e366452-43ea-4b45-98ea-1e4346f84e70] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004850333s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-620000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable ingress --alsologtostderr -v=1: (7.247532125s)
--- PASS: TestAddons/parallel/Ingress (17.42s)
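
Note: the ingress-dns verification above resolves a test hostname directly against the cluster IP. A minimal manual equivalent, assuming the same profile and that ingress-dns is still enabled (the two commands mirror the steps at addons_test.go:293 and addons_test.go:299):

  # look up the test hostname against the minikube node IP
  IP=$(out/minikube-darwin-arm64 -p addons-620000 ip)
  nslookup hello-john.test "$IP"

A successful lookup returns the cluster IP (192.168.105.2 in this run) for hello-john.test.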

TestAddons/parallel/InspektorGadget (10.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-75nxv" [9a02ea4f-ce76-48fc-93fa-55b3c9239cc6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011235709s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-620000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-620000: (5.267322667s)
--- PASS: TestAddons/parallel/InspektorGadget (10.28s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.247208ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h9hbf" [bad8f796-8cfd-42f6-a05a-6dea9f543306] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013144333s
addons_test.go:417: (dbg) Run:  kubectl --context addons-620000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (58.66s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.605417ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-620000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-620000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [74638219-d5c6-4c2a-835d-7692eba5e76a] Pending
helpers_test.go:344: "task-pv-pod" [74638219-d5c6-4c2a-835d-7692eba5e76a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [74638219-d5c6-4c2a-835d-7692eba5e76a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.007700125s
addons_test.go:590: (dbg) Run:  kubectl --context addons-620000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-620000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-620000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-620000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-620000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-620000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-620000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [935a668f-321a-4c2f-aaf6-521ac1e81637] Pending
helpers_test.go:344: "task-pv-pod-restore" [935a668f-321a-4c2f-aaf6-521ac1e81637] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [935a668f-321a-4c2f-aaf6-521ac1e81637] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.011535667s
addons_test.go:632: (dbg) Run:  kubectl --context addons-620000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-620000 delete pod task-pv-pod-restore: (1.2431775s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-620000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-620000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.111794084s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.66s)
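
Note: the repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the test helper polling until each claim reaches the expected phase (typically Bound). When reproducing by hand, a single wait can stand in for the loop, assuming kubectl v1.23+ (which added jsonpath support to "kubectl wait"):

  # block until the claim reports phase Bound, with the same 6m budget the test uses
  kubectl --context addons-620000 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m

The same form works for hpvc-restore; the test uses its own poll-based helper instead.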

TestAddons/parallel/Headlamp (17.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-620000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-trzk6" [e1297fe3-1a1e-49dc-acd8-1648b821f3fc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-trzk6" [e1297fe3-1a1e-49dc-acd8-1648b821f3fc] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.01001s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable headlamp --alsologtostderr -v=1: (5.298157209s)
--- PASS: TestAddons/parallel/Headlamp (17.65s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-56nj4" [0e4bf259-0291-4b5d-b325-b9e7652bae77] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01055775s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-620000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (53.97s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-620000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-620000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fc061113-b803-4c81-8cae-863beb01b7c2] Pending
helpers_test.go:344: "test-local-path" [fc061113-b803-4c81-8cae-863beb01b7c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fc061113-b803-4c81-8cae-863beb01b7c2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fc061113-b803-4c81-8cae-863beb01b7c2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.00630325s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-620000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 ssh "cat /opt/local-path-provisioner/pvc-2e0b5cc7-3949-4897-86b6-c44b91c321d9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-620000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-620000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.4550285s)
--- PASS: TestAddons/parallel/LocalPath (53.97s)

TestAddons/parallel/NvidiaDevicePlugin (6.18s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xbbmc" [d25e5994-a4f9-4ec0-b2c7-60b234f58eea] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006400959s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-620000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.18s)

TestAddons/parallel/Yakd (10.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zbnv6" [3ed9671f-b962-42fb-acdd-6476be7da5c9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00550675s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-620000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-620000 addons disable yakd --alsologtostderr -v=1: (5.252376791s)
--- PASS: TestAddons/parallel/Yakd (10.26s)

TestAddons/StoppedEnableDisable (9.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-620000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-620000: (9.20533125s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-620000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-620000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-620000
--- PASS: TestAddons/StoppedEnableDisable (9.39s)

TestHyperKitDriverInstallOrUpdate (10.74s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.74s)

TestErrorSpam/setup (35.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-615000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-615000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 --driver=qemu2 : (35.810269042s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.81s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (55.26s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop: (3.200498083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop: (26.029801417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-615000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-615000 stop: (26.030077333s)
--- PASS: TestErrorSpam/stop (55.26s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19648-1650/.minikube/files/etc/test/nested/copy/2174/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.77s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m16.772364542s)
--- PASS: TestFunctional/serial/StartWithProxy (76.77s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --alsologtostderr -v=8: (36.575791416s)
functional_test.go:663: soft start took 36.576286666s for "functional-737000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.58s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-737000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.1: (1.032372209s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local286555650/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache add minikube-local-cache-test:functional-737000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 cache add minikube-local-cache-test:functional-737000: (1.495177625s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache delete minikube-local-cache-test:functional-737000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-737000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (70.740625ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 kubectl -- --context functional-737000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 kubectl -- --context functional-737000 get pods: (2.016687583s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.02s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-737000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-737000 get pods: (1.023710042s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (37.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0915 11:14:13.093287    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.101189    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.114758    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.137057    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.180626    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.262533    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.426156    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:13.749773    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:14.393532    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:15.677164    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:18.240642    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:14:23.364301    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-737000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.149268792s)
functional_test.go:761: restart took 37.149355792s for "functional-737000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.15s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-737000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2057305394/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.58s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-737000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-737000: exit status 115 (145.4285ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31385 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-737000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-737000 delete -f testdata/invalidsvc.yaml: (1.07374775s)
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 config get cpus: exit status 14 (30.8235ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 config get cpus: exit status 14 (30.683291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (7.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-737000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-737000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3397: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.72s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.556875ms)

-- stdout --
	* [functional-737000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0915 11:15:09.776374    3384 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:15:09.776507    3384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.776510    3384 out.go:358] Setting ErrFile to fd 2...
	I0915 11:15:09.776512    3384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.776642    3384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:15:09.777665    3384 out.go:352] Setting JSON to false
	I0915 11:15:09.794396    3384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2672,"bootTime":1726421437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:15:09.794469    3384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:15:09.798923    3384 out.go:177] * [functional-737000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0915 11:15:09.805982    3384 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:15:09.806014    3384 notify.go:220] Checking for updates...
	I0915 11:15:09.812838    3384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:15:09.815857    3384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:15:09.818900    3384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:15:09.821891    3384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:15:09.824930    3384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:15:09.828138    3384 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:15:09.828392    3384 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:15:09.832872    3384 out.go:177] * Using the qemu2 driver based on existing profile
	I0915 11:15:09.839837    3384 start.go:297] selected driver: qemu2
	I0915 11:15:09.839842    3384 start.go:901] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:15:09.839888    3384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:15:09.845994    3384 out.go:201] 
	W0915 11:15:09.849733    3384 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 11:15:09.853883    3384 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-737000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.8ms)

-- stdout --
	* [functional-737000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0915 11:15:09.650974    3380 out.go:345] Setting OutFile to fd 1 ...
	I0915 11:15:09.651096    3380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.651100    3380 out.go:358] Setting ErrFile to fd 2...
	I0915 11:15:09.651102    3380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 11:15:09.651221    3380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
	I0915 11:15:09.652599    3380 out.go:352] Setting JSON to false
	I0915 11:15:09.670669    3380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2672,"bootTime":1726421437,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0915 11:15:09.670750    3380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 11:15:09.675931    3380 out.go:177] * [functional-737000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0915 11:15:09.684853    3380 out.go:177]   - MINIKUBE_LOCATION=19648
	I0915 11:15:09.684894    3380 notify.go:220] Checking for updates...
	I0915 11:15:09.693839    3380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	I0915 11:15:09.696925    3380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0915 11:15:09.699882    3380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 11:15:09.702918    3380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	I0915 11:15:09.705837    3380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 11:15:09.709208    3380 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 11:15:09.709469    3380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 11:15:09.713818    3380 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0915 11:15:09.720881    3380 start.go:297] selected driver: qemu2
	I0915 11:15:09.720888    3380 start.go:901] validating driver "qemu2" against &{Name:functional-737000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19644/minikube-v1.34.0-1726358414-19644-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-737000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 11:15:09.720930    3380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 11:15:09.727793    3380 out.go:201] 
	W0915 11:15:09.731930    3380 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 11:15:09.735754    3380 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4f38dd71-5f87-480f-97be-0a1da9db02d3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003331459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-737000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-737000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7e76583f-bec3-4a53-b203-adafe22fbbaf] Pending
helpers_test.go:344: "sp-pod" [7e76583f-bec3-4a53-b203-adafe22fbbaf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7e76583f-bec3-4a53-b203-adafe22fbbaf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.011339292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-737000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-737000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [651b2aa9-e396-474a-96dc-5fcf4ee7663b] Pending
helpers_test.go:344: "sp-pod" [651b2aa9-e396-474a-96dc-5fcf4ee7663b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [651b2aa9-e396-474a-96dc-5fcf4ee7663b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009731667s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-737000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.93s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -n functional-737000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cp functional-737000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2756604134/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -n functional-737000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -n functional-737000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2174/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/test/nested/copy/2174/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2174.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/2174.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2174.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /usr/share/ca-certificates/2174.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/21742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/21742.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/21742.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /usr/share/ca-certificates/21742.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-737000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "sudo systemctl is-active crio": exit status 1 (63.209916ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3225: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-737000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f093bec4-c546-4de6-a1f6-f688095ae58e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0915 11:14:33.605522    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [f093bec4-c546-4de6-a1f6-f688095ae58e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003662417s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-737000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.49.191 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-737000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-737000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-737000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-kppnr" [b4bcf037-e61f-41c0-a225-a1f9516847ff] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-kppnr" [b4bcf037-e61f-41c0-a225-a1f9516847ff] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009326083s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
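The hello-node workload exercised by the remaining ServiceCmd tests can be recreated by hand; a minimal sketch using the same context and image as this run:

    kubectl --context functional-737000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-737000 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-737000 get pods -l app=hello-node -w   # wait until Running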

TestFunctional/parallel/ServiceCmd/List (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service list -o json
functional_test.go:1494: Took "293.377916ms" to run "out/minikube-darwin-arm64 -p functional-737000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30133
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30133
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
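The HTTPS, Format, and URL checks above are variations of the same service subcommand; a minimal sketch (the NodePort 30133 is specific to this run):

    out/minikube-darwin-arm64 -p functional-737000 service --namespace=default --https --url hello-node
    out/minikube-darwin-arm64 -p functional-737000 service hello-node --url --format={{.IP}}   # IP only
    out/minikube-darwin-arm64 -p functional-737000 service hello-node --url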

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "104.906333ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "36.594167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "85.435708ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "35.42575ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
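The three ProfileCmd tests differ only in output flags; a minimal sketch of the variants timed above:

    out/minikube-darwin-arm64 profile list                    # table output
    out/minikube-darwin-arm64 profile list -l                 # light mode, skips cluster status checks
    out/minikube-darwin-arm64 profile list -o json
    out/minikube-darwin-arm64 profile list -o json --light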

TestFunctional/parallel/MountCmd/any-port (5.28s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2705294038/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726424102078454000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2705294038/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726424102078454000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2705294038/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726424102078454000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2705294038/001/test-1726424102078454000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 18:15 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 18:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 18:15 test-1726424102078454000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh cat /mount-9p/test-1726424102078454000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-737000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [38f1f4c1-21ef-4710-a1a0-564a5a041d9a] Pending
helpers_test.go:344: "busybox-mount" [38f1f4c1-21ef-4710-a1a0-564a5a041d9a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [38f1f4c1-21ef-4710-a1a0-564a5a041d9a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [38f1f4c1-21ef-4710-a1a0-564a5a041d9a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003566666s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-737000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2705294038/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.28s)
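The 9p mount flow above can be reproduced by hand; a minimal sketch with an illustrative host directory (/tmp/demo-mount):

    # keep the mount process running in another shell
    out/minikube-darwin-arm64 mount -p functional-737000 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1

    # verify from inside the guest, then inspect the contents
    out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-737000 ssh -- ls -la /mount-9p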

TestFunctional/parallel/MountCmd/specific-port (0.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3481278637/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.431917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3481278637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "sudo umount -f /mount-9p": exit status 1 (61.404959ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-737000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3481278637/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.94s)
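Pinning the 9p server to a fixed port only adds one flag; a minimal sketch (46464 matches the port used in this run):

    out/minikube-darwin-arm64 mount -p functional-737000 /tmp/demo-mount:/mount-9p --port 46464 --alsologtostderr -v=1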

TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1: exit status 1 (75.009583ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-737000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-737000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1502658325/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.09s)
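The cleanup step relies on the mount kill switch, which terminates every mount process for the profile; a minimal sketch:

    out/minikube-darwin-arm64 mount -p functional-737000 --kill=true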

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.24s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-737000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-737000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format short --alsologtostderr:
I0915 11:15:23.293608    3548 out.go:345] Setting OutFile to fd 1 ...
I0915 11:15:23.293736    3548 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.293739    3548 out.go:358] Setting ErrFile to fd 2...
I0915 11:15:23.293741    3548 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.293874    3548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:15:23.294325    3548 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.294391    3548 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.294671    3548 retry.go:31] will retry after 795.713267ms: connect: dial unix /Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/monitor: connect: connection refused
I0915 11:15:24.094499    3548 ssh_runner.go:195] Run: systemctl --version
I0915 11:15:24.094538    3548 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/id_rsa Username:docker}
I0915 11:15:24.134797    3548 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.90s)
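The four ImageList tests run the same command with different --format values; a minimal sketch (the table, json, and yaml variants follow below):

    out/minikube-darwin-arm64 -p functional-737000 image ls --format short
    out/minikube-darwin-arm64 -p functional-737000 image ls --format table
    out/minikube-darwin-arm64 -p functional-737000 image ls --format json
    out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml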

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-737000 | c6d786d403f00 | 30B    |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kicbase/echo-server               | functional-737000 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format table --alsologtostderr:
I0915 11:15:24.257281    3560 out.go:345] Setting OutFile to fd 1 ...
I0915 11:15:24.257475    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:24.257478    3560 out.go:358] Setting ErrFile to fd 2...
I0915 11:15:24.257480    3560 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:24.257614    3560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:15:24.258037    3560 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:24.258101    3560 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:24.258993    3560 ssh_runner.go:195] Run: systemctl --version
I0915 11:15:24.259009    3560 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/id_rsa Username:docker}
I0915 11:15:24.283660    3560 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"c6d786d403f00521eadfd9f1ef34f86c939eabec9c47f87f32784abe910d1346","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-737000"],"size":"30
"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-737000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746d
ae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"si
ze":"91600000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format json --alsologtostderr:
I0915 11:15:24.187178    3558 out.go:345] Setting OutFile to fd 1 ...
I0915 11:15:24.187312    3558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:24.187318    3558 out.go:358] Setting ErrFile to fd 2...
I0915 11:15:24.187321    3558 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:24.187468    3558 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:15:24.187923    3558 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:24.187986    3558 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:24.188802    3558 ssh_runner.go:195] Run: systemctl --version
I0915 11:15:24.188809    3558 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/id_rsa Username:docker}
I0915 11:15:24.215474    3558 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-737000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: c6d786d403f00521eadfd9f1ef34f86c939eabec9c47f87f32784abe910d1346
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-737000
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image ls --format yaml --alsologtostderr:
I0915 11:15:23.293504    3549 out.go:345] Setting OutFile to fd 1 ...
I0915 11:15:23.293679    3549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.293682    3549 out.go:358] Setting ErrFile to fd 2...
I0915 11:15:23.293684    3549 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.293828    3549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:15:23.294253    3549 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.294316    3549 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.295165    3549 ssh_runner.go:195] Run: systemctl --version
I0915 11:15:23.295172    3549 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/id_rsa Username:docker}
I0915 11:15:23.319979    3549 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-737000 ssh pgrep buildkitd: exit status 1 (59.156292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr: (1.834257042s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 testdata/build --alsologtostderr:
I0915 11:15:23.429020    3556 out.go:345] Setting OutFile to fd 1 ...
I0915 11:15:23.429235    3556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.429238    3556 out.go:358] Setting ErrFile to fd 2...
I0915 11:15:23.429241    3556 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 11:15:23.429368    3556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19648-1650/.minikube/bin
I0915 11:15:23.429801    3556 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.430599    3556 config.go:182] Loaded profile config "functional-737000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 11:15:23.431461    3556 ssh_runner.go:195] Run: systemctl --version
I0915 11:15:23.431470    3556 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19648-1650/.minikube/machines/functional-737000/id_rsa Username:docker}
I0915 11:15:23.456706    3556 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1363894620.tar
I0915 11:15:23.456778    3556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 11:15:23.460323    3556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1363894620.tar
I0915 11:15:23.461856    3556 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1363894620.tar: stat -c "%s %y" /var/lib/minikube/build/build.1363894620.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1363894620.tar': No such file or directory
I0915 11:15:23.461873    3556 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1363894620.tar --> /var/lib/minikube/build/build.1363894620.tar (3072 bytes)
I0915 11:15:23.471011    3556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1363894620
I0915 11:15:23.474696    3556 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1363894620 -xf /var/lib/minikube/build/build.1363894620.tar
I0915 11:15:23.478459    3556 docker.go:360] Building image: /var/lib/minikube/build/build.1363894620
I0915 11:15:23.478528    3556 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-737000 /var/lib/minikube/build/build.1363894620
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:244aa554c8b31b1eca3c6e359e6d53171a9d893580aaa5e5c612f2a3b99b4ccc done
#8 naming to localhost/my-image:functional-737000 done
#8 DONE 0.0s
I0915 11:15:25.221384    3556 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-737000 /var/lib/minikube/build/build.1363894620: (1.742896083s)
I0915 11:15:25.221460    3556 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1363894620
I0915 11:15:25.225461    3556 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1363894620.tar
I0915 11:15:25.228708    3556 build_images.go:217] Built localhost/my-image:functional-737000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.1363894620.tar
I0915 11:15:25.228723    3556 build_images.go:133] succeeded building to: functional-737000
I0915 11:15:25.228727    3556 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.97s)
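The in-cluster build above can be reproduced against any directory containing a Dockerfile; a minimal sketch with an illustrative build context (./my-build-context):

    # build inside the VM's Docker daemon and tag the result there
    out/minikube-darwin-arm64 -p functional-737000 image build -t localhost/my-image:functional-737000 ./my-build-context --alsologtostderr
    out/minikube-darwin-arm64 -p functional-737000 image ls   # confirm the new tag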

TestFunctional/parallel/ImageCommands/Setup (1.81s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.789569625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-737000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon kicbase/echo-server:functional-737000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon kicbase/echo-server:functional-737000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-737000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load --daemon kicbase/echo-server:functional-737000 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-darwin-arm64 -p functional-737000 image load --daemon kicbase/echo-server:functional-737000 --alsologtostderr: (1.0158015s)
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.90s)
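The Setup/Load/Reload/TagAndLoad sequence pushes a host-side Docker image into the cluster runtime; a minimal sketch of the round trip:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-737000
    out/minikube-darwin-arm64 -p functional-737000 image load --daemon kicbase/echo-server:functional-737000
    out/minikube-darwin-arm64 -p functional-737000 image ls   # verify the image landed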

TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-737000 docker-env) && out/minikube-darwin-arm64 status -p functional-737000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-737000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
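docker-env points a host shell at the Docker daemon inside the VM; a minimal sketch of the eval pattern the test uses:

    # after eval, plain docker commands talk to the daemon inside functional-737000
    eval $(out/minikube-darwin-arm64 -p functional-737000 docker-env)
    docker images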

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)
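All three UpdateContextCmd cases invoke the same command, which rewrites the profile's kubeconfig entry to match the cluster's current address; a minimal sketch:

    out/minikube-darwin-arm64 -p functional-737000 update-context --alsologtostderr -v=2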

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image save kicbase/echo-server:functional-737000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image rm kicbase/echo-server:functional-737000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-737000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-737000 image save --daemon kicbase/echo-server:functional-737000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-737000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
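The SaveToFile/Remove/LoadFromFile/SaveDaemon tests round-trip an image through a tarball; a minimal sketch with an illustrative tar path (/tmp/echo-server-save.tar):

    out/minikube-darwin-arm64 -p functional-737000 image save kicbase/echo-server:functional-737000 /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-737000 image rm kicbase/echo-server:functional-737000
    out/minikube-darwin-arm64 -p functional-737000 image load /tmp/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-737000 image save --daemon kicbase/echo-server:functional-737000   # back to the host daemon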

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-737000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-737000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-737000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (178.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-748000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0915 11:15:35.050900    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:16:57.006673    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-748000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.664958125s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.85s)
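The HA control plane above comes from a single start invocation; a minimal sketch of the flags used in this run (memory in MB):

    out/minikube-darwin-arm64 start -p ha-748000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
    out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr   # control-plane nodes should report Running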

TestMultiControlPlane/serial/DeployApp (4.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-748000 -- rollout status deployment/busybox: (2.890856875s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-7q7dp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-gphzt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-j5hgc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-7q7dp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-gphzt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-j5hgc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-7q7dp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-gphzt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-j5hgc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.33s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-7q7dp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-7q7dp -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-gphzt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-gphzt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-j5hgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-748000 -- exec busybox-7dff88458-j5hgc -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
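
The host-reachability check is two exec calls per pod: resolve host.minikube.internal, then ping the address it returns. A sketch with <pod> as above; 192.168.105.1 is the qemu2 host-side address observed in this run:

    # the awk/cut pipeline in the test merely extracts the resolved IP
    $ kubectl --context ha-748000 exec <pod> -- nslookup host.minikube.internal
    # a single ICMP echo is enough to prove the pod can reach the host
    $ kubectl --context ha-748000 exec <pod> -- sh -c "ping -c 1 192.168.105.1"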

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.84s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-748000 -v=7 --alsologtostderr
E0915 11:19:13.120512    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-748000 -v=7 --alsologtostderr: (56.627747125s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.84s)
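
Growing the cluster is a single CLI call; without further flags the new node joins as a worker. Sketch, same binary caveat as above:

    $ minikube node add -p ha-748000
    # status should now list ha-748000-m04 alongside the control planes
    $ minikube -p ha-748000 status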

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.16s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-748000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)
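
The label assertion is a single jsonpath pass over every node, exactly as run above:

    # prints each node's full label map for the assertion
    $ kubectl --context ha-748000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"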

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (4.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp testdata/cp-test.txt ha-748000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2833298096/001/cp-test_ha-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000:/home/docker/cp-test.txt ha-748000-m02:/home/docker/cp-test_ha-748000_ha-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test_ha-748000_ha-748000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000:/home/docker/cp-test.txt ha-748000-m03:/home/docker/cp-test_ha-748000_ha-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test_ha-748000_ha-748000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000:/home/docker/cp-test.txt ha-748000-m04:/home/docker/cp-test_ha-748000_ha-748000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test_ha-748000_ha-748000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp testdata/cp-test.txt ha-748000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2833298096/001/cp-test_ha-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m02:/home/docker/cp-test.txt ha-748000:/home/docker/cp-test_ha-748000-m02_ha-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test_ha-748000-m02_ha-748000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m02:/home/docker/cp-test.txt ha-748000-m03:/home/docker/cp-test_ha-748000-m02_ha-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test_ha-748000-m02_ha-748000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m02:/home/docker/cp-test.txt ha-748000-m04:/home/docker/cp-test_ha-748000-m02_ha-748000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test_ha-748000-m02_ha-748000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp testdata/cp-test.txt ha-748000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2833298096/001/cp-test_ha-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m03:/home/docker/cp-test.txt ha-748000:/home/docker/cp-test_ha-748000-m03_ha-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test_ha-748000-m03_ha-748000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m03:/home/docker/cp-test.txt ha-748000-m02:/home/docker/cp-test_ha-748000-m03_ha-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test_ha-748000-m03_ha-748000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m03:/home/docker/cp-test.txt ha-748000-m04:/home/docker/cp-test_ha-748000-m03_ha-748000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test_ha-748000-m03_ha-748000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp testdata/cp-test.txt ha-748000-m04:/home/docker/cp-test.txt
E0915 11:19:29.780415    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:29.788246    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:29.801663    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:19:29.824997    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test.txt"
E0915 11:19:29.868487    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2833298096/001/cp-test_ha-748000-m04.txt
E0915 11:19:29.952090    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m04:/home/docker/cp-test.txt ha-748000:/home/docker/cp-test_ha-748000-m04_ha-748000.txt
E0915 11:19:30.114242    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000 "sudo cat /home/docker/cp-test_ha-748000-m04_ha-748000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m04:/home/docker/cp-test.txt ha-748000-m02:/home/docker/cp-test_ha-748000-m04_ha-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test.txt"
E0915 11:19:30.437420    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test_ha-748000-m04_ha-748000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 cp ha-748000-m04:/home/docker/cp-test.txt ha-748000-m03:/home/docker/cp-test_ha-748000-m04_ha-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-748000 ssh -n ha-748000-m03 "sudo cat /home/docker/cp-test_ha-748000-m04_ha-748000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.10s)
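
Every transfer in the matrix above is the same round trip: push a file with minikube cp, then read it back over minikube ssh to confirm the bytes landed. Sketch for one pairing (the -m02/-m03/-m04 suffixes select the secondary nodes):

    # host -> node copy
    $ minikube -p ha-748000 cp testdata/cp-test.txt ha-748000-m02:/home/docker/cp-test.txt
    # verify on the node
    $ minikube -p ha-748000 ssh -n ha-748000-m02 "sudo cat /home/docker/cp-test.txt"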

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0915 11:29:13.106505    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/addons-620000/client.crt: no such file or directory" logger="UnhandledError"
E0915 11:29:29.767105    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.990945542s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (77.99s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (3.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-309000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-309000 --output=json --user=testUser: (3.162420792s)
--- PASS: TestJSONOutput/stop/Command (3.16s)
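
The stop above is driven entirely by flags: --output=json emits one JSON event per step, and --user stamps the invocation into minikube's audit log. As run:

    $ minikube stop -p json-output-309000 --output=json --user=testUser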

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-301000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-301000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.865167ms)

-- stdout --
	{"specversion":"1.0","id":"3b88fe45-eb01-4f8e-8479-64f0014c58fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-301000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ed20ccb-5a78-42c0-ab39-4734d093c9e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19648"}}
	{"specversion":"1.0","id":"55224c2d-dbd3-4753-ae0b-848339d87077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig"}}
	{"specversion":"1.0","id":"ee4df114-0f58-4052-807c-3a45730920c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6fb5f84d-9796-488d-a68b-011a82ed591c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e4722adf-60e1-4f59-92b7-cddbcfe655d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube"}}
	{"specversion":"1.0","id":"0d3e61e3-3e5a-4dc5-9412-7405834a2952","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d393cfa-f1c6-4c2d-841e-945c7dbba630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-301000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-301000
--- PASS: TestErrorJSONOutput (0.20s)
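
Each stdout line above is a self-contained CloudEvents-style JSON object, so the stream can be filtered mechanically. A sketch using jq (not part of the test suite) to surface only error events from a failing start:

    # prints: 56 The driver 'fail' is not supported on darwin/arm64
    $ minikube start -p json-output-error-301000 --output=json --driver=fail \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.message'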

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.87s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-324000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.641917ms)

-- stdout --
	* [NoKubernetes-324000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19648
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19648-1650/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19648-1650/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
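
This failure is the expected guard: --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr hint shows the way out when a version has been pinned globally. Straight from the output above:

    # rejected with exit status 14 (MK_USAGE)
    $ minikube start -p NoKubernetes-324000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2
    # clear the sticky global setting, then retry without the version pin
    $ minikube config unset kubernetes-version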

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-324000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-324000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.052583ms)

-- stdout --
	* The control-plane node NoKubernetes-324000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-324000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
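
The probe works by exit status alone: systemctl is-active --quiet prints nothing and returns non-zero when the unit is not running, so the test only needs the command to fail. In this run the guest itself is stopped, so minikube exits 83 before the probe even runs, which the test also accepts. Sketch:

    $ minikube ssh -p NoKubernetes-324000 "sudo systemctl is-active --quiet service kubelet"
    # any non-zero status here means kubelet is not active
    $ echo $?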

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.49s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E0915 11:52:32.906438    2174 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19648-1650/.minikube/profiles/functional-737000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.707660083s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.78308425s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.49s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.15s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-324000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-324000: (2.148072916s)
--- PASS: TestNoKubernetes/serial/Stop (2.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-324000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-324000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.108292ms)

-- stdout --
	* The control-plane node NoKubernetes-324000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-324000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-515000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-634000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-634000 --alsologtostderr -v=3: (3.398167709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-634000 -n old-k8s-version-634000: exit status 7 (42.417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-634000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
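
The stopped-profile addon flow hinges on status exit codes: with the host down, the status call prints Stopped and exits 7, which the test treats as acceptable, and the addon enable that follows is recorded in the profile config so it takes effect on the next start. As run above (the test also overrides the dashboard's scraper image):

    $ minikube status --format={{.Host}} -p old-k8s-version-634000   # prints Stopped, exit 7
    $ minikube addons enable dashboard -p old-k8s-version-634000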

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-331000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-331000 --alsologtostderr -v=3: (3.295909375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-331000 -n no-preload-331000: exit status 7 (50.875209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-331000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-526000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-526000 --alsologtostderr -v=3: (3.057195125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (3.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-294000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-294000 --alsologtostderr -v=3: (3.405326166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-526000 -n embed-certs-526000: exit status 7 (56.237959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-526000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-294000 -n default-k8s-diff-port-294000: exit status 7 (55.201417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-294000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-221000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
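
The enable call above also demonstrates the image-override knobs: --images remaps a component's image and --registries points it at a different registry, here deliberately fake values used by the suite:

    $ minikube addons enable metrics-server -p newest-cni-221000 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain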

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (3.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-221000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-221000 --alsologtostderr -v=3: (3.259702083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-221000 -n newest-cni-221000: exit status 7 (61.332792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-221000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.31s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-271000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-271000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-271000

>>> host: crictl pods:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: crictl containers:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> k8s: describe netcat deployment:
error: context "cilium-271000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-271000" does not exist

>>> k8s: netcat logs:
error: context "cilium-271000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-271000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-271000" does not exist

>>> k8s: coredns logs:
error: context "cilium-271000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-271000" does not exist

>>> k8s: api server logs:
error: context "cilium-271000" does not exist

>>> host: /etc/cni:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: ip a s:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: ip r s:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: iptables-save:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: iptables table nat:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-271000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-271000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-271000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-271000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-271000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-271000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-271000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-271000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-271000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-271000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-271000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: kubelet daemon config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> k8s: kubelet logs:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-271000

>>> host: docker daemon status:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: docker daemon config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: docker system info:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: cri-docker daemon status:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: cri-docker daemon config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: cri-dockerd version:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: containerd daemon status:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: containerd daemon config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: containerd config dump:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: crio daemon status:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: crio daemon config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: /etc/crio:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

>>> host: crio config:
* Profile "cilium-271000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-271000"

----------------------- debugLogs end: cilium-271000 [took: 2.20252625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-271000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-271000
--- SKIP: TestNetworkPlugins/group/cilium (2.31s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-362000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-362000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
