Test Report: QEMU_macOS 19326

35e58bd4f2346c2fce1feaa9162990386c1fdc2b:2024-07-25:35495

Failed tests (97/282)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 21.89
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.08
55 TestCertOptions 10.06
56 TestCertExpiration 195.24
57 TestDockerFlags 10.21
58 TestForceSystemdFlag 10.09
59 TestForceSystemdEnv 10.38
104 TestFunctional/parallel/ServiceCmdConnect 31.22
176 TestMultiControlPlane/serial/StopSecondaryNode 214.12
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 103.25
178 TestMultiControlPlane/serial/RestartSecondaryNode 209.06
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.42
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.01
183 TestMultiControlPlane/serial/StopCluster 202.09
184 TestMultiControlPlane/serial/RestartCluster 5.25
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
186 TestMultiControlPlane/serial/AddSecondaryNode 0.07
190 TestImageBuild/serial/Setup 9.91
193 TestJSONOutput/start/Command 10.04
199 TestJSONOutput/pause/Command 0.08
205 TestJSONOutput/unpause/Command 0.04
222 TestMinikubeProfile 10.1
225 TestMountStart/serial/StartWithMountFirst 10.23
228 TestMultiNode/serial/FreshStart2Nodes 10.07
229 TestMultiNode/serial/DeployApp2Nodes 115.52
230 TestMultiNode/serial/PingHostFrom2Pods 0.09
231 TestMultiNode/serial/AddNode 0.07
232 TestMultiNode/serial/MultiNodeLabels 0.06
233 TestMultiNode/serial/ProfileList 0.07
234 TestMultiNode/serial/CopyFile 0.06
235 TestMultiNode/serial/StopNode 0.13
236 TestMultiNode/serial/StartAfterStop 49.84
237 TestMultiNode/serial/RestartKeepsNodes 9.19
238 TestMultiNode/serial/DeleteNode 0.1
239 TestMultiNode/serial/StopMultiNode 3.62
240 TestMultiNode/serial/RestartMultiNode 5.25
241 TestMultiNode/serial/ValidateNameConflict 20
245 TestPreload 9.88
247 TestScheduledStopUnix 10.28
248 TestSkaffold 12.68
251 TestRunningBinaryUpgrade 592.25
253 TestKubernetesUpgrade 18.63
266 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.68
267 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.38
269 TestStoppedBinaryUpgrade/Upgrade 574.87
271 TestPause/serial/Start 9.85
281 TestNoKubernetes/serial/StartWithK8s 9.97
282 TestNoKubernetes/serial/StartWithStopK8s 5.27
283 TestNoKubernetes/serial/Start 5.32
287 TestNoKubernetes/serial/StartNoArgs 5.35
289 TestNetworkPlugins/group/auto/Start 9.86
290 TestNetworkPlugins/group/calico/Start 9.89
291 TestNetworkPlugins/group/custom-flannel/Start 9.79
292 TestNetworkPlugins/group/false/Start 9.78
293 TestNetworkPlugins/group/kindnet/Start 9.73
294 TestNetworkPlugins/group/flannel/Start 9.85
295 TestNetworkPlugins/group/enable-default-cni/Start 9.83
296 TestNetworkPlugins/group/bridge/Start 10.04
297 TestNetworkPlugins/group/kubenet/Start 9.82
299 TestStartStop/group/old-k8s-version/serial/FirstStart 9.88
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
305 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
309 TestStartStop/group/old-k8s-version/serial/Pause 0.1
311 TestStartStop/group/no-preload/serial/FirstStart 9.78
312 TestStartStop/group/no-preload/serial/DeployApp 0.09
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
316 TestStartStop/group/no-preload/serial/SecondStart 5.23
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/no-preload/serial/Pause 0.1
322 TestStartStop/group/embed-certs/serial/FirstStart 9.84
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.12
325 TestStartStop/group/embed-certs/serial/DeployApp 0.1
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
329 TestStartStop/group/embed-certs/serial/SecondStart 5.33
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.59
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
338 TestStartStop/group/embed-certs/serial/Pause 0.1
340 TestStartStop/group/newest-cni/serial/FirstStart 9.92
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
349 TestStartStop/group/newest-cni/serial/SecondStart 5.26
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
353 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (21.89s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-493000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-493000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (21.887593041s)

-- stdout --
	{"specversion":"1.0","id":"6d2f68f1-ce6c-4be0-aa7d-cb1565af168d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-493000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f40b177d-5c23-4d02-9a6d-f20525e2e8d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"6a2036e6-cdb1-47b4-911e-8242645b3236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig"}}
	{"specversion":"1.0","id":"b0e601f5-cb2f-4aea-b190-0ddeffd2d64e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1267afa1-183f-42fb-b08c-60330a40b5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03b6f5ca-baf3-48b9-8091-f33e51d599eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube"}}
	{"specversion":"1.0","id":"a808daf6-d6b7-4755-8436-368c755f3510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e0ec9dfa-5593-4571-96f5-2e4c022a9559","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c49f6524-2f83-48ae-a3c6-8af9dccc26e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ff453b49-8380-4280-b0e4-1a2b583b90fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"71a7f8ae-f3b0-4dd9-8352-2f3916cf4b86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-493000\" primary control-plane node in \"download-only-493000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"48e1da30-5891-48c5-98cc-db32510e54bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ad48248-6535-40c0-8afc-98b379b719d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60] Decompressors:map[bz2:0x1400000fdd0 gz:0x1400000fdd8 tar:0x1400000fd40 tar.bz2:0x1400000fd70 tar.gz:0x1400000fd80 tar.xz:0x1400000fd90 tar.zst:0x1400000fdc0 tbz2:0x1400000fd70 tgz:0x1400000fd80 txz:0x1400000fd90 tzst:0x1400000fdc0 xz:0x1400000fe00 zip:0x1400000fe20 zst:0x1400000fe08] Getters:map[file:0x1400054c600 http:0x14000754640 https:0x14000754690] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f29b5a2d-d400-4d11-9305-4cd9d9a399db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0725 10:27:54.232158    1696 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:27:54.232311    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:27:54.232314    1696 out.go:304] Setting ErrFile to fd 2...
	I0725 10:27:54.232316    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:27:54.232454    1696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	W0725 10:27:54.232549    1696 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19326-1196/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19326-1196/.minikube/config/config.json: no such file or directory
	I0725 10:27:54.233853    1696 out.go:298] Setting JSON to true
	I0725 10:27:54.251118    1696 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1638,"bootTime":1721926836,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:27:54.251191    1696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:27:54.256477    1696 out.go:97] [download-only-493000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:27:54.256610    1696 notify.go:220] Checking for updates...
	W0725 10:27:54.256618    1696 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 10:27:54.260429    1696 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:27:54.263546    1696 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:27:54.268579    1696 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:27:54.271529    1696 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:27:54.274540    1696 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	W0725 10:27:54.280494    1696 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:27:54.280746    1696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:27:54.285515    1696 out.go:97] Using the qemu2 driver based on user configuration
	I0725 10:27:54.285534    1696 start.go:297] selected driver: qemu2
	I0725 10:27:54.285547    1696 start.go:901] validating driver "qemu2" against <nil>
	I0725 10:27:54.285618    1696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:27:54.288481    1696 out.go:169] Automatically selected the socket_vmnet network
	I0725 10:27:54.294168    1696 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0725 10:27:54.294258    1696 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:27:54.294286    1696 cni.go:84] Creating CNI manager for ""
	I0725 10:27:54.294304    1696 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 10:27:54.294350    1696 start.go:340] cluster config:
	{Name:download-only-493000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:27:54.299489    1696 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:27:54.303546    1696 out.go:97] Downloading VM boot image ...
	I0725 10:27:54.303564    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0725 10:28:03.296045    1696 out.go:97] Starting "download-only-493000" primary control-plane node in "download-only-493000" cluster
	I0725 10:28:03.296074    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:03.369179    1696 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:03.369185    1696 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:03.369333    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:03.373441    1696 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0725 10:28:03.373453    1696 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:03.453884    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:14.959629    1696 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:14.959790    1696 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:15.656838    1696 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0725 10:28:15.657029    1696 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-493000/config.json ...
	I0725 10:28:15.657048    1696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-493000/config.json: {Name:mkcbe285ca3d49455fafab46dbe6de1c059a254e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:28:15.657291    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:15.657488    1696 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0725 10:28:16.046820    1696 out.go:169] 
	W0725 10:28:16.052974    1696 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60] Decompressors:map[bz2:0x1400000fdd0 gz:0x1400000fdd8 tar:0x1400000fd40 tar.bz2:0x1400000fd70 tar.gz:0x1400000fd80 tar.xz:0x1400000fd90 tar.zst:0x1400000fdc0 tbz2:0x1400000fd70 tgz:0x1400000fd80 txz:0x1400000fd90 tzst:0x1400000fdc0 xz:0x1400000fe00 zip:0x1400000fe20 zst:0x1400000fe08] Getters:map[file:0x1400054c600 http:0x14000754640 https:0x14000754690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0725 10:28:16.052998    1696 out_reason.go:110] 
	W0725 10:28:16.059853    1696 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:28:16.062967    1696 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-493000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (21.89s)
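Root cause: the run exits with status 40 because the checksum fetch for kubectl v1.20.0 on darwin/arm64 returns HTTP 404, which strongly suggests upstream never published a darwin/arm64 kubectl for that release. A minimal standalone Go sketch (not part of the suite; URL copied verbatim from the failure message above) that reproduces the probe:

// probe.go: probe the same checksum URL the minikube downloader fetches.
// A 404 here confirms the artifact is missing upstream rather than a
// transient network error.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	// Given this report, the expected output is "404 Not Found".
	fmt.Println(resp.Status)
}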

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
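This subtest is a follow-on failure: it only asserts that the binary cached by the previous step exists on disk. A minimal Go sketch of the equivalent check (path copied from the log; it fails here because the download above never completed):

// statcheck.go: the same existence check the subtest performs.
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		// Prints the same "no such file or directory" error the test reports.
		fmt.Println("cache check failed:", err)
		return
	}
	fmt.Println("kubectl cached at", path)
}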

TestOffline (10.08s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-009000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-009000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.927622375s)

-- stdout --
	* [offline-docker-009000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-009000" primary control-plane node in "offline-docker-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:07:26.595932    4385 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:07:26.596070    4385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:26.596073    4385 out.go:304] Setting ErrFile to fd 2...
	I0725 11:07:26.596076    4385 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:26.596225    4385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:07:26.597414    4385 out.go:298] Setting JSON to false
	I0725 11:07:26.615245    4385 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4010,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:07:26.615321    4385 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:07:26.621024    4385 out.go:177] * [offline-docker-009000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:07:26.626951    4385 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:07:26.626987    4385 notify.go:220] Checking for updates...
	I0725 11:07:26.632876    4385 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:07:26.635899    4385 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:07:26.638909    4385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:07:26.641877    4385 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:07:26.644904    4385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:07:26.648306    4385 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:26.648375    4385 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:07:26.651981    4385 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:07:26.658919    4385 start.go:297] selected driver: qemu2
	I0725 11:07:26.658934    4385 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:07:26.658942    4385 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:07:26.660853    4385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:07:26.663881    4385 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:07:26.667068    4385 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:07:26.667086    4385 cni.go:84] Creating CNI manager for ""
	I0725 11:07:26.667094    4385 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:07:26.667098    4385 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:07:26.667142    4385 start.go:340] cluster config:
	{Name:offline-docker-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:07:26.670827    4385 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:07:26.677786    4385 out.go:177] * Starting "offline-docker-009000" primary control-plane node in "offline-docker-009000" cluster
	I0725 11:07:26.681918    4385 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:07:26.681942    4385 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:07:26.681952    4385 cache.go:56] Caching tarball of preloaded images
	I0725 11:07:26.682012    4385 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:07:26.682017    4385 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:07:26.682075    4385 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/offline-docker-009000/config.json ...
	I0725 11:07:26.682085    4385 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/offline-docker-009000/config.json: {Name:mk48ee0898a716d581caf108d8fbfcf9509e2237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:07:26.682358    4385 start.go:360] acquireMachinesLock for offline-docker-009000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:26.682392    4385 start.go:364] duration metric: took 26.166µs to acquireMachinesLock for "offline-docker-009000"
	I0725 11:07:26.682406    4385 start.go:93] Provisioning new machine with config: &{Name:offline-docker-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:26.682432    4385 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:26.686882    4385 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:26.702923    4385 start.go:159] libmachine.API.Create for "offline-docker-009000" (driver="qemu2")
	I0725 11:07:26.702955    4385 client.go:168] LocalClient.Create starting
	I0725 11:07:26.703039    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:26.703075    4385 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:26.703084    4385 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:26.703130    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:26.703158    4385 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:26.703169    4385 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:26.703555    4385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:26.853088    4385 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:27.077599    4385 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:27.077610    4385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:27.077797    4385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:27.087560    4385 main.go:141] libmachine: STDOUT: 
	I0725 11:07:27.087578    4385 main.go:141] libmachine: STDERR: 
	I0725 11:07:27.087640    4385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2 +20000M
	I0725 11:07:27.098414    4385 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:27.098430    4385 main.go:141] libmachine: STDERR: 
	I0725 11:07:27.098446    4385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:27.098451    4385 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:27.098473    4385 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:27.098503    4385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:8d:fd:d7:d1:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:27.100327    4385 main.go:141] libmachine: STDOUT: 
	I0725 11:07:27.100344    4385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:27.100364    4385 client.go:171] duration metric: took 397.414333ms to LocalClient.Create
	I0725 11:07:29.100400    4385 start.go:128] duration metric: took 2.418029791s to createHost
	I0725 11:07:29.100429    4385 start.go:83] releasing machines lock for "offline-docker-009000", held for 2.418104875s
	W0725 11:07:29.100441    4385 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:29.106386    4385 out.go:177] * Deleting "offline-docker-009000" in qemu2 ...
	W0725 11:07:29.118172    4385 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:29.118185    4385 start.go:729] Will try again in 5 seconds ...
	I0725 11:07:34.120236    4385 start.go:360] acquireMachinesLock for offline-docker-009000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:34.120750    4385 start.go:364] duration metric: took 411.709µs to acquireMachinesLock for "offline-docker-009000"
	I0725 11:07:34.120889    4385 start.go:93] Provisioning new machine with config: &{Name:offline-docker-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:34.121212    4385 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:34.130636    4385 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:34.183815    4385 start.go:159] libmachine.API.Create for "offline-docker-009000" (driver="qemu2")
	I0725 11:07:34.183868    4385 client.go:168] LocalClient.Create starting
	I0725 11:07:34.183994    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:34.184066    4385 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:34.184090    4385 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:34.184170    4385 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:34.184214    4385 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:34.184229    4385 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:34.184751    4385 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:34.354051    4385 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:34.431994    4385 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:34.431999    4385 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:34.432165    4385 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:34.441181    4385 main.go:141] libmachine: STDOUT: 
	I0725 11:07:34.441198    4385 main.go:141] libmachine: STDERR: 
	I0725 11:07:34.441254    4385 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2 +20000M
	I0725 11:07:34.448979    4385 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:34.448994    4385 main.go:141] libmachine: STDERR: 
	I0725 11:07:34.449003    4385 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:34.449009    4385 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:34.449021    4385 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:34.449054    4385 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:00:93:7f:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/offline-docker-009000/disk.qcow2
	I0725 11:07:34.450557    4385 main.go:141] libmachine: STDOUT: 
	I0725 11:07:34.450573    4385 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:34.450586    4385 client.go:171] duration metric: took 266.71975ms to LocalClient.Create
	I0725 11:07:36.452749    4385 start.go:128] duration metric: took 2.331562791s to createHost
	I0725 11:07:36.452841    4385 start.go:83] releasing machines lock for "offline-docker-009000", held for 2.332134625s
	W0725 11:07:36.453217    4385 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:36.464821    4385 out.go:177] 
	W0725 11:07:36.469755    4385 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:07:36.469794    4385 out.go:239] * 
	* 
	W0725 11:07:36.472907    4385 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:07:36.481833    4385 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-009000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-25 11:07:36.497067 -0700 PDT m=+2382.428830376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-009000 -n offline-docker-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-009000 -n offline-docker-009000: exit status 7 (66.079875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-009000
--- FAIL: TestOffline (10.08s)
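Root cause here, and in most of the remaining failures in this report: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and every connection to the unix socket /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon does not appear to be running on this agent. A minimal Go sketch (socket path copied from the log) that checks the daemon directly:

// socketprobe.go: dial the socket_vmnet unix socket the qemu2 driver uses.
// On this agent it should fail with "connection refused", matching the log.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails the same way outside the test run, restarting the socket_vmnet service on the agent is the likely fix; every subsequent GUEST_PROVISION failure below shows the same "Connection refused" signature.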

TestCertOptions (10.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-810000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-810000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.796132s)

-- stdout --
	* [cert-options-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-810000" primary control-plane node in "cert-options-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-810000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-810000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-810000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (82.728292ms)

-- stdout --
	* The control-plane node cert-options-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-810000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-810000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-810000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-810000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-810000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (42.286833ms)

-- stdout --
	* The control-plane node cert-options-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-810000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-810000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-810000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-810000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-25 11:08:07.19386 -0700 PDT m=+2413.126533334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-810000 -n cert-options-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-810000 -n cert-options-810000: exit status 7 (29.931084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-810000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-810000
--- FAIL: TestCertOptions (10.06s)
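
Every step in this test failed for the single reason visible in the stderr blocks above: the qemu2 driver could not dial /var/run/socket_vmnet, so no VM was ever created. A minimal triage on the build agent, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (the service name and socket path below are assumptions about this agent's setup):

	# Is the socket_vmnet daemon loaded and running?
	sudo launchctl list | grep socket_vmnet
	# Does the socket the driver dials exist?
	ls -l /var/run/socket_vmnet
	# Restart the Homebrew service if it is down.
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

The same "Connection refused" error recurs in every qemu2-driver test below, so a single socket_vmnet outage on the agent plausibly accounts for most of the failures in this report.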

TestCertExpiration (195.24s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.863547417s)

-- stdout --
	* [cert-expiration-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-876000" primary control-plane node in "cert-expiration-876000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-876000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.229331166s)

-- stdout --
	* [cert-expiration-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-876000" primary control-plane node in "cert-expiration-876000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-876000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-876000" primary control-plane node in "cert-expiration-876000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-876000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-876000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-25 11:11:07.271959 -0700 PDT m=+2593.209968917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-876000 -n cert-expiration-876000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-876000 -n cert-expiration-876000: exit status 7 (64.57275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-876000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-876000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-876000
--- FAIL: TestCertExpiration (195.24s)
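
For reference, the flow this test exercises, reconstructed from the two start commands in the log (the intermediate wait is an assumption; the test must let the 3m certificates lapse, which fits the 195s total duration):

	out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # allow the 3-minute certificates to expire
	out/minikube-darwin-arm64 start -p cert-expiration-876000 --memory=2048 --cert-expiration=8760h --driver=qemu2

The second start is expected to warn about expired certificates (the cert_options_test.go:136 assertion); here it never got that far because the VM could not be created or restarted.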

TestDockerFlags (10.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-463000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-463000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.972349917s)

-- stdout --
	* [docker-flags-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-463000" primary control-plane node in "docker-flags-463000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-463000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:07:47.052158    4576 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:07:47.052302    4576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:47.052306    4576 out.go:304] Setting ErrFile to fd 2...
	I0725 11:07:47.052307    4576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:47.052434    4576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:07:47.053455    4576 out.go:298] Setting JSON to false
	I0725 11:07:47.069533    4576 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4031,"bootTime":1721926836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:07:47.069596    4576 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:07:47.075778    4576 out.go:177] * [docker-flags-463000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:07:47.083564    4576 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:07:47.083622    4576 notify.go:220] Checking for updates...
	I0725 11:07:47.091516    4576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:07:47.094545    4576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:07:47.097544    4576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:07:47.100561    4576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:07:47.103539    4576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:07:47.106917    4576 config.go:182] Loaded profile config "force-systemd-flag-964000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:47.106987    4576 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:47.107033    4576 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:07:47.111532    4576 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:07:47.117476    4576 start.go:297] selected driver: qemu2
	I0725 11:07:47.117481    4576 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:07:47.117489    4576 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:07:47.119715    4576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:07:47.124491    4576 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:07:47.127596    4576 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0725 11:07:47.127641    4576 cni.go:84] Creating CNI manager for ""
	I0725 11:07:47.127652    4576 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:07:47.127663    4576 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:07:47.127705    4576 start.go:340] cluster config:
	{Name:docker-flags-463000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:07:47.131610    4576 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:07:47.139518    4576 out.go:177] * Starting "docker-flags-463000" primary control-plane node in "docker-flags-463000" cluster
	I0725 11:07:47.143504    4576 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:07:47.143519    4576 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:07:47.143529    4576 cache.go:56] Caching tarball of preloaded images
	I0725 11:07:47.143593    4576 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:07:47.143599    4576 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:07:47.143658    4576 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/docker-flags-463000/config.json ...
	I0725 11:07:47.143671    4576 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/docker-flags-463000/config.json: {Name:mk8a21fa0de20c11237d684be4ff275496da0fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:07:47.143899    4576 start.go:360] acquireMachinesLock for docker-flags-463000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:47.143937    4576 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "docker-flags-463000"
	I0725 11:07:47.143951    4576 start.go:93] Provisioning new machine with config: &{Name:docker-flags-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:47.143978    4576 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:47.152496    4576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:47.170896    4576 start.go:159] libmachine.API.Create for "docker-flags-463000" (driver="qemu2")
	I0725 11:07:47.170925    4576 client.go:168] LocalClient.Create starting
	I0725 11:07:47.170990    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:47.171024    4576 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:47.171033    4576 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:47.171076    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:47.171101    4576 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:47.171110    4576 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:47.171450    4576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:47.322457    4576 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:47.489510    4576 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:47.489516    4576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:47.489719    4576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:47.499264    4576 main.go:141] libmachine: STDOUT: 
	I0725 11:07:47.499281    4576 main.go:141] libmachine: STDERR: 
	I0725 11:07:47.499342    4576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2 +20000M
	I0725 11:07:47.507197    4576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:47.507214    4576 main.go:141] libmachine: STDERR: 
	I0725 11:07:47.507232    4576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:47.507236    4576 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:47.507248    4576 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:47.507277    4576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:e5:85:b8:63:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:47.508914    4576 main.go:141] libmachine: STDOUT: 
	I0725 11:07:47.508927    4576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:47.508947    4576 client.go:171] duration metric: took 338.026875ms to LocalClient.Create
	I0725 11:07:49.511046    4576 start.go:128] duration metric: took 2.367121375s to createHost
	I0725 11:07:49.511078    4576 start.go:83] releasing machines lock for "docker-flags-463000", held for 2.367198291s
	W0725 11:07:49.511137    4576 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:49.525361    4576 out.go:177] * Deleting "docker-flags-463000" in qemu2 ...
	W0725 11:07:49.555202    4576 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:49.555222    4576 start.go:729] Will try again in 5 seconds ...
	I0725 11:07:54.557300    4576 start.go:360] acquireMachinesLock for docker-flags-463000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:54.618573    4576 start.go:364] duration metric: took 61.14425ms to acquireMachinesLock for "docker-flags-463000"
	I0725 11:07:54.618751    4576 start.go:93] Provisioning new machine with config: &{Name:docker-flags-463000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-463000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:54.619011    4576 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:54.633735    4576 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:54.683792    4576 start.go:159] libmachine.API.Create for "docker-flags-463000" (driver="qemu2")
	I0725 11:07:54.683843    4576 client.go:168] LocalClient.Create starting
	I0725 11:07:54.684317    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:54.684388    4576 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:54.684410    4576 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:54.684479    4576 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:54.684530    4576 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:54.684544    4576 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:54.685239    4576 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:54.848600    4576 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:54.925778    4576 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:54.925783    4576 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:54.925947    4576 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:54.935089    4576 main.go:141] libmachine: STDOUT: 
	I0725 11:07:54.935103    4576 main.go:141] libmachine: STDERR: 
	I0725 11:07:54.935143    4576 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2 +20000M
	I0725 11:07:54.942846    4576 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:54.942859    4576 main.go:141] libmachine: STDERR: 
	I0725 11:07:54.942869    4576 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:54.942873    4576 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:54.942882    4576 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:54.942905    4576 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:e8:41:88:28:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/docker-flags-463000/disk.qcow2
	I0725 11:07:54.944468    4576 main.go:141] libmachine: STDOUT: 
	I0725 11:07:54.944488    4576 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:54.944501    4576 client.go:171] duration metric: took 260.661584ms to LocalClient.Create
	I0725 11:07:56.946664    4576 start.go:128] duration metric: took 2.327695583s to createHost
	I0725 11:07:56.946718    4576 start.go:83] releasing machines lock for "docker-flags-463000", held for 2.328169041s
	W0725 11:07:56.947092    4576 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-463000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:56.963897    4576 out.go:177] 
	W0725 11:07:56.970789    4576 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:07:56.970816    4576 out.go:239] * 
	* 
	W0725 11:07:56.973666    4576 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:07:56.982696    4576 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-463000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (78.817292ms)

-- stdout --
	* The control-plane node docker-flags-463000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-463000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-463000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-463000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-463000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-463000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-463000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.413625ms)

-- stdout --
	* The control-plane node docker-flags-463000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-463000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-463000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-463000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. Output: "* The control-plane node docker-flags-463000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-463000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-25 11:07:57.122531 -0700 PDT m=+2403.054906209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-463000 -n docker-flags-463000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-463000 -n docker-flags-463000: exit status 7 (28.952125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-463000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-463000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-463000
--- FAIL: TestDockerFlags (10.21s)
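
The assertions at docker_test.go:63 and docker_test.go:73 describe what a passing run would show. On a node that actually boots, the two systemctl probes from the log should surface the flags passed at start (the expected substrings come from the test's own --docker-env and --docker-opt flags; the exact output shape is a sketch):

	out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to contain FOO=BAR and BAZ=BAT
	out/minikube-darwin-arm64 -p docker-flags-463000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to contain --debug and --icc=true

Both probes instead exited 83 with the "host is not running" message, again a direct consequence of the socket_vmnet connection failure.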

TestForceSystemdFlag (10.09s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-964000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-964000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.901707584s)

-- stdout --
	* [force-systemd-flag-964000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-964000" primary control-plane node in "force-systemd-flag-964000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-964000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:07:42.113326    4552 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:07:42.113432    4552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:42.113435    4552 out.go:304] Setting ErrFile to fd 2...
	I0725 11:07:42.113437    4552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:42.113568    4552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:07:42.114618    4552 out.go:298] Setting JSON to false
	I0725 11:07:42.130715    4552 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4026,"bootTime":1721926836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:07:42.130809    4552 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:07:42.136632    4552 out.go:177] * [force-systemd-flag-964000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:07:42.143520    4552 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:07:42.143554    4552 notify.go:220] Checking for updates...
	I0725 11:07:42.150573    4552 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:07:42.153576    4552 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:07:42.156592    4552 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:07:42.159583    4552 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:07:42.162532    4552 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:07:42.165938    4552 config.go:182] Loaded profile config "force-systemd-env-029000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:42.166010    4552 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:42.166063    4552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:07:42.170558    4552 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:07:42.177561    4552 start.go:297] selected driver: qemu2
	I0725 11:07:42.177568    4552 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:07:42.177574    4552 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:07:42.179903    4552 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:07:42.182548    4552 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:07:42.185581    4552 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 11:07:42.185623    4552 cni.go:84] Creating CNI manager for ""
	I0725 11:07:42.185631    4552 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:07:42.185635    4552 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:07:42.185668    4552 start.go:340] cluster config:
	{Name:force-systemd-flag-964000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:07:42.189298    4552 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:07:42.196519    4552 out.go:177] * Starting "force-systemd-flag-964000" primary control-plane node in "force-systemd-flag-964000" cluster
	I0725 11:07:42.200476    4552 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:07:42.200496    4552 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:07:42.200507    4552 cache.go:56] Caching tarball of preloaded images
	I0725 11:07:42.200576    4552 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:07:42.200581    4552 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:07:42.200630    4552 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/force-systemd-flag-964000/config.json ...
	I0725 11:07:42.200642    4552 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/force-systemd-flag-964000/config.json: {Name:mk75b1a2de8766e7326a8b8b5f521851c103ff70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:07:42.200972    4552 start.go:360] acquireMachinesLock for force-systemd-flag-964000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:42.201007    4552 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "force-systemd-flag-964000"
	I0725 11:07:42.201019    4552 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:42.201054    4552 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:42.205559    4552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:42.222104    4552 start.go:159] libmachine.API.Create for "force-systemd-flag-964000" (driver="qemu2")
	I0725 11:07:42.222129    4552 client.go:168] LocalClient.Create starting
	I0725 11:07:42.222185    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:42.222215    4552 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:42.222223    4552 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:42.222265    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:42.222287    4552 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:42.222295    4552 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:42.222775    4552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:42.373152    4552 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:42.429053    4552 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:42.429059    4552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:42.429248    4552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:42.438584    4552 main.go:141] libmachine: STDOUT: 
	I0725 11:07:42.438601    4552 main.go:141] libmachine: STDERR: 
	I0725 11:07:42.438651    4552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2 +20000M
	I0725 11:07:42.446444    4552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:42.446460    4552 main.go:141] libmachine: STDERR: 
	I0725 11:07:42.446474    4552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:42.446478    4552 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:42.446486    4552 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:42.446513    4552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:c0:69:ca:7e:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:42.448114    4552 main.go:141] libmachine: STDOUT: 
	I0725 11:07:42.448128    4552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:42.448146    4552 client.go:171] duration metric: took 226.019542ms to LocalClient.Create
	I0725 11:07:44.450344    4552 start.go:128] duration metric: took 2.249325541s to createHost
	I0725 11:07:44.450435    4552 start.go:83] releasing machines lock for "force-systemd-flag-964000", held for 2.249481417s
	W0725 11:07:44.450497    4552 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:44.475714    4552 out.go:177] * Deleting "force-systemd-flag-964000" in qemu2 ...
	W0725 11:07:44.497142    4552 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:44.497162    4552 start.go:729] Will try again in 5 seconds ...
	I0725 11:07:49.499193    4552 start.go:360] acquireMachinesLock for force-systemd-flag-964000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:49.511260    4552 start.go:364] duration metric: took 11.867167ms to acquireMachinesLock for "force-systemd-flag-964000"
	I0725 11:07:49.511361    4552 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-964000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-964000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:49.511630    4552 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:49.521144    4552 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:49.572550    4552 start.go:159] libmachine.API.Create for "force-systemd-flag-964000" (driver="qemu2")
	I0725 11:07:49.572607    4552 client.go:168] LocalClient.Create starting
	I0725 11:07:49.572730    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:49.572798    4552 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:49.572813    4552 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:49.572870    4552 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:49.572913    4552 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:49.572929    4552 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:49.573595    4552 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:49.737351    4552 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:49.915573    4552 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:49.915582    4552 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:49.915776    4552 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:49.925161    4552 main.go:141] libmachine: STDOUT: 
	I0725 11:07:49.925187    4552 main.go:141] libmachine: STDERR: 
	I0725 11:07:49.925234    4552 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2 +20000M
	I0725 11:07:49.933139    4552 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:49.933160    4552 main.go:141] libmachine: STDERR: 
	I0725 11:07:49.933175    4552 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:49.933180    4552 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:49.933189    4552 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:49.933214    4552 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:61:71:d2:2c:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-flag-964000/disk.qcow2
	I0725 11:07:49.934802    4552 main.go:141] libmachine: STDOUT: 
	I0725 11:07:49.934816    4552 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:49.934828    4552 client.go:171] duration metric: took 362.226708ms to LocalClient.Create
	I0725 11:07:51.936951    4552 start.go:128] duration metric: took 2.425362583s to createHost
	I0725 11:07:51.937001    4552 start.go:83] releasing machines lock for "force-systemd-flag-964000", held for 2.4257675s
	W0725 11:07:51.937361    4552 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-964000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-964000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:51.954245    4552 out.go:177] 
	W0725 11:07:51.961071    4552 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:07:51.961126    4552 out.go:239] * 
	* 
	W0725 11:07:51.963721    4552 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:07:51.973924    4552 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-964000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-964000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-964000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.897334ms)

-- stdout --
	* The control-plane node force-systemd-flag-964000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-964000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-964000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-25 11:07:52.069699 -0700 PDT m=+2398.001924334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-964000 -n force-systemd-flag-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-964000 -n force-systemd-flag-964000: exit status 7 (33.084417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-964000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-964000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-964000
--- FAIL: TestForceSystemdFlag (10.09s)
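
Both create attempts above die at the same point: qemu-img builds the disk image without error, but /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the vmnet helper at /var/run/socket_vmnet, so the VM never launches. A minimal way to reproduce that check outside of minikube (a hedged diagnostic sketch using standard macOS tools; it is not part of the captured log):

	# does the unix socket exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet </dev/null && echo listening || echo refused

If the socket file is missing or nothing is listening, the socket_vmnet daemon on the test host is down, which would account for every "Connection refused" in this log.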

TestForceSystemdEnv (10.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-029000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-029000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.191338125s)

-- stdout --
	* [force-systemd-env-029000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-029000" primary control-plane node in "force-systemd-env-029000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-029000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:07:36.673417    4520 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:07:36.673534    4520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:36.673537    4520 out.go:304] Setting ErrFile to fd 2...
	I0725 11:07:36.673539    4520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:07:36.673663    4520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:07:36.674714    4520 out.go:298] Setting JSON to false
	I0725 11:07:36.691299    4520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4020,"bootTime":1721926836,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:07:36.691404    4520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:07:36.696312    4520 out.go:177] * [force-systemd-env-029000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:07:36.706367    4520 notify.go:220] Checking for updates...
	I0725 11:07:36.709373    4520 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:07:36.717327    4520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:07:36.725241    4520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:07:36.733137    4520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:07:36.741253    4520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:07:36.753260    4520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0725 11:07:36.757625    4520 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:07:36.757666    4520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:07:36.761239    4520 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:07:36.768259    4520 start.go:297] selected driver: qemu2
	I0725 11:07:36.768266    4520 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:07:36.768272    4520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:07:36.770577    4520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:07:36.774280    4520 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:07:36.778419    4520 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 11:07:36.778446    4520 cni.go:84] Creating CNI manager for ""
	I0725 11:07:36.778453    4520 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:07:36.778458    4520 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:07:36.778515    4520 start.go:340] cluster config:
	{Name:force-systemd-env-029000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:07:36.782002    4520 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:07:36.785258    4520 out.go:177] * Starting "force-systemd-env-029000" primary control-plane node in "force-systemd-env-029000" cluster
	I0725 11:07:36.793329    4520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:07:36.793345    4520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:07:36.793359    4520 cache.go:56] Caching tarball of preloaded images
	I0725 11:07:36.793422    4520 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:07:36.793428    4520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:07:36.793542    4520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/force-systemd-env-029000/config.json ...
	I0725 11:07:36.793554    4520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/force-systemd-env-029000/config.json: {Name:mk415782fe8f7bb5d49b6bf00651cf6bf75427be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:07:36.793772    4520 start.go:360] acquireMachinesLock for force-systemd-env-029000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:36.793815    4520 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "force-systemd-env-029000"
	I0725 11:07:36.793827    4520 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:36.793863    4520 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:36.801333    4520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:36.817748    4520 start.go:159] libmachine.API.Create for "force-systemd-env-029000" (driver="qemu2")
	I0725 11:07:36.817784    4520 client.go:168] LocalClient.Create starting
	I0725 11:07:36.817864    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:36.817895    4520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:36.817909    4520 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:36.817961    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:36.817985    4520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:36.817996    4520 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:36.818403    4520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:36.996354    4520 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:37.036664    4520 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:37.036676    4520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:37.036891    4520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:37.046319    4520 main.go:141] libmachine: STDOUT: 
	I0725 11:07:37.046342    4520 main.go:141] libmachine: STDERR: 
	I0725 11:07:37.046400    4520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2 +20000M
	I0725 11:07:37.054503    4520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:37.054517    4520 main.go:141] libmachine: STDERR: 
	I0725 11:07:37.054530    4520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:37.054534    4520 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:37.054548    4520 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:37.054572    4520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:cb:9f:c1:aa:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:37.056181    4520 main.go:141] libmachine: STDOUT: 
	I0725 11:07:37.056195    4520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:37.056218    4520 client.go:171] duration metric: took 238.436875ms to LocalClient.Create
	I0725 11:07:39.058400    4520 start.go:128] duration metric: took 2.26457325s to createHost
	I0725 11:07:39.058461    4520 start.go:83] releasing machines lock for "force-systemd-env-029000", held for 2.264702459s
	W0725 11:07:39.058525    4520 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:39.065954    4520 out.go:177] * Deleting "force-systemd-env-029000" in qemu2 ...
	W0725 11:07:39.094140    4520 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:39.094163    4520 start.go:729] Will try again in 5 seconds ...
	I0725 11:07:44.096283    4520 start.go:360] acquireMachinesLock for force-systemd-env-029000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:44.450592    4520 start.go:364] duration metric: took 354.119375ms to acquireMachinesLock for "force-systemd-env-029000"
	I0725 11:07:44.450714    4520 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-029000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-029000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:44.451046    4520 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:44.463705    4520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0725 11:07:44.512579    4520 start.go:159] libmachine.API.Create for "force-systemd-env-029000" (driver="qemu2")
	I0725 11:07:44.512630    4520 client.go:168] LocalClient.Create starting
	I0725 11:07:44.512751    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:44.512824    4520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:44.512841    4520 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:44.512905    4520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:44.512949    4520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:44.512962    4520 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:44.513641    4520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:44.679972    4520 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:44.772461    4520 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:44.772468    4520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:44.772657    4520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:44.782091    4520 main.go:141] libmachine: STDOUT: 
	I0725 11:07:44.782109    4520 main.go:141] libmachine: STDERR: 
	I0725 11:07:44.782152    4520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2 +20000M
	I0725 11:07:44.789981    4520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:44.789995    4520 main.go:141] libmachine: STDERR: 
	I0725 11:07:44.790022    4520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:44.790026    4520 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:44.790035    4520 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:44.790061    4520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:8e:0e:22:6d:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/force-systemd-env-029000/disk.qcow2
	I0725 11:07:44.791685    4520 main.go:141] libmachine: STDOUT: 
	I0725 11:07:44.791701    4520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:44.791711    4520 client.go:171] duration metric: took 279.083583ms to LocalClient.Create
	I0725 11:07:46.793950    4520 start.go:128] duration metric: took 2.342918167s to createHost
	I0725 11:07:46.794026    4520 start.go:83] releasing machines lock for "force-systemd-env-029000", held for 2.343462084s
	W0725 11:07:46.794362    4520 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-029000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:46.805816    4520 out.go:177] 
	W0725 11:07:46.810700    4520 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:07:46.810746    4520 out.go:239] * 
	* 
	W0725 11:07:46.813238    4520 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:07:46.822786    4520 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-029000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-029000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-029000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.019542ms)

-- stdout --
	* The control-plane node force-systemd-env-029000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-029000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-029000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-25 11:07:46.915528 -0700 PDT m=+2392.847600709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-029000 -n force-systemd-env-029000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-029000 -n force-systemd-env-029000: exit status 7 (33.794375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-029000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-029000
--- FAIL: TestForceSystemdEnv (10.38s)
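
This is the same failure signature as TestForceSystemdFlag above: the socket_vmnet daemon never answers, so the qemu2 driver cannot attach networking on either create attempt. Assuming socket_vmnet was installed the way minikube's qemu2 driver docs suggest (an assumption; the log only shows the /opt/socket_vmnet paths), restarting the service is the first remediation to try:

	# restart the vmnet helper, then confirm a daemon process exists
	HOMEBREW=$(which brew)
	sudo ${HOMEBREW} services restart socket_vmnet
	pgrep -fl socket_vmnet

If socket_vmnet was installed manually rather than via Homebrew, the equivalent step is restarting whatever launchd job carries it.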

TestFunctional/parallel/ServiceCmdConnect (31.22s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-963000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-963000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-jtg5v" [b08f7430-f439-4432-86e5-b81aaca8302f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-jtg5v" [b08f7430-f439-4432-86e5-b81aaca8302f] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004027833s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31808
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31808: Get "http://192.168.105.4:31808": dial tcp 192.168.105.4:31808: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-963000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-jtg5v
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-963000/192.168.105.4
Start Time:       Thu, 25 Jul 2024 10:39:27 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://64d7a6df0738bddd5d8d59839c459292f0e3eae2f9b1c552e8a5b1223cf9b257
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 25 Jul 2024 10:39:47 -0700
      Finished:     Thu, 25 Jul 2024 10:39:47 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 25 Jul 2024 10:39:31 -0700
      Finished:     Thu, 25 Jul 2024 10:39:31 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-plrb6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-plrb6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-jtg5v to functional-963000
  Normal   Pulling    30s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     27s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.184s (3.184s including waiting). Image size: 84957542 bytes.
  Normal   Created    10s (x3 over 27s)  kubelet            Created container echoserver-arm
  Normal   Started    10s (x3 over 27s)  kubelet            Started container echoserver-arm
  Normal   Pulled     10s (x2 over 26s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    9s (x3 over 25s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-jtg5v_default(b08f7430-f439-4432-86e5-b81aaca8302f)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-963000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
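
That single log line is the root cause for this test: "exec format error" means the binary inside the image was built for a different CPU architecture than the node, so the container crash-loops and the pod never becomes Ready. A hedged way to compare the two on this profile (a sketch mirroring the CLI style used elsewhere in this report; it is not part of the captured log):

	# node architecture vs. the architecture recorded in the image metadata
	out/minikube-darwin-arm64 -p functional-963000 ssh -- uname -m
	out/minikube-darwin-arm64 -p functional-963000 ssh -- docker image inspect --format {{.Architecture}} registry.k8s.io/echoserver-arm:1.8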
functional_test.go:1610: (dbg) Run:  kubectl --context functional-963000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.122.86
IPs:                      10.110.122.86
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31808/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
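
Note the empty Endpoints field in the describe output above: with the only pod crash-looping and never Ready, the NodePort service has no backends, which is consistent with every probe of http://192.168.105.4:31808 being refused. A quick confirmation (hedged; not part of the captured log):

	kubectl --context functional-963000 get endpoints hello-node-connect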
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-963000 -n functional-963000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-963000 ssh -- ls                                                                                          | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh cat                                                                                            | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | /mount-9p/test-1721929188411197000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh stat                                                                                           | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh stat                                                                                           | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh sudo                                                                                           | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1160503561/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh -- ls                                                                                          | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh sudo                                                                                           | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount2    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount1    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount3    |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-963000 ssh findmnt                                                                                        | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT | 25 Jul 24 10:39 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-963000                                                                                                 | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-963000 --dry-run                                                                                       | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-963000 | jenkins | v1.33.1 | 25 Jul 24 10:39 PDT |                     |
	|           | -p functional-963000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:39:56
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:39:56.650772    2632 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:39:56.650904    2632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.650908    2632 out.go:304] Setting ErrFile to fd 2...
	I0725 10:39:56.650911    2632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.651042    2632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:39:56.652072    2632 out.go:298] Setting JSON to false
	I0725 10:39:56.669020    2632 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2360,"bootTime":1721926836,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:39:56.669121    2632 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:39:56.672228    2632 out.go:177] * [functional-963000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:39:56.679272    2632 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:39:56.679296    2632 notify.go:220] Checking for updates...
	I0725 10:39:56.686211    2632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:39:56.689189    2632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:39:56.692212    2632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:39:56.695258    2632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 10:39:56.698199    2632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:39:56.701506    2632 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:39:56.701764    2632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:39:56.706211    2632 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 10:39:56.713224    2632 start.go:297] selected driver: qemu2
	I0725 10:39:56.713231    2632 start.go:901] validating driver "qemu2" against &{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:39:56.713303    2632 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:39:56.715569    2632 cni.go:84] Creating CNI manager for ""
	I0725 10:39:56.715583    2632 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 10:39:56.715620    2632 start.go:340] cluster config:
	{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:39:56.727179    2632 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Jul 25 17:39:50 functional-963000 cri-dockerd[6511]: time="2024-07-25T17:39:50Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.013788586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.013818262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.013826181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.013854315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.046716794Z" level=info msg="shim disconnected" id=3f0eaeb3345f990c2297e0f3895e8e34a78fd79d7b3a08e39cbe9e6f126c987d namespace=moby
	Jul 25 17:39:51 functional-963000 dockerd[6253]: time="2024-07-25T17:39:51.046839915Z" level=info msg="ignoring event" container=3f0eaeb3345f990c2297e0f3895e8e34a78fd79d7b3a08e39cbe9e6f126c987d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.046888513Z" level=warning msg="cleaning up after shim disconnected" id=3f0eaeb3345f990c2297e0f3895e8e34a78fd79d7b3a08e39cbe9e6f126c987d namespace=moby
	Jul 25 17:39:51 functional-963000 dockerd[6260]: time="2024-07-25T17:39:51.046894223Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 25 17:39:52 functional-963000 dockerd[6253]: time="2024-07-25T17:39:52.215395746Z" level=info msg="ignoring event" container=ea7c87a2bcf7a648c8121524e9fc730adfb47cf628bafebb4b3e7002074f5882 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 17:39:52 functional-963000 dockerd[6260]: time="2024-07-25T17:39:52.215643906Z" level=info msg="shim disconnected" id=ea7c87a2bcf7a648c8121524e9fc730adfb47cf628bafebb4b3e7002074f5882 namespace=moby
	Jul 25 17:39:52 functional-963000 dockerd[6260]: time="2024-07-25T17:39:52.215738393Z" level=warning msg="cleaning up after shim disconnected" id=ea7c87a2bcf7a648c8121524e9fc730adfb47cf628bafebb4b3e7002074f5882 namespace=moby
	Jul 25 17:39:52 functional-963000 dockerd[6260]: time="2024-07-25T17:39:52.215755565Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.718595597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.718625940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.718631567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.718665202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.746451749Z" level=info msg="shim disconnected" id=7a985c31946df77d127d2b3b0d177d70ce6a01d863482b241f5de6945712644a namespace=moby
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.746496680Z" level=warning msg="cleaning up after shim disconnected" id=7a985c31946df77d127d2b3b0d177d70ce6a01d863482b241f5de6945712644a namespace=moby
	Jul 25 17:39:53 functional-963000 dockerd[6260]: time="2024-07-25T17:39:53.746500973Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 25 17:39:53 functional-963000 dockerd[6253]: time="2024-07-25T17:39:53.747478022Z" level=info msg="ignoring event" container=7a985c31946df77d127d2b3b0d177d70ce6a01d863482b241f5de6945712644a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 25 17:39:55 functional-963000 cri-dockerd[6511]: time="2024-07-25T17:39:55Z" level=error msg="error getting RW layer size for container ID '741e3e1d33932a6bd11fd0b02496947571d62a30d67a46e690323e0ee0c511fd': Error response from daemon: No such container: 741e3e1d33932a6bd11fd0b02496947571d62a30d67a46e690323e0ee0c511fd"
	Jul 25 17:39:55 functional-963000 cri-dockerd[6511]: time="2024-07-25T17:39:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '741e3e1d33932a6bd11fd0b02496947571d62a30d67a46e690323e0ee0c511fd'"
	Jul 25 17:39:55 functional-963000 cri-dockerd[6511]: time="2024-07-25T17:39:55Z" level=error msg="error getting RW layer size for container ID '16dfd4783fb221e05e7739c4c9f9771b4ebfd1daf70ca947f845efec2ebfd71b': Error response from daemon: No such container: 16dfd4783fb221e05e7739c4c9f9771b4ebfd1daf70ca947f845efec2ebfd71b"
	Jul 25 17:39:55 functional-963000 cri-dockerd[6511]: time="2024-07-25T17:39:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '16dfd4783fb221e05e7739c4c9f9771b4ebfd1daf70ca947f845efec2ebfd71b'"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7a985c31946df       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   87e798f0da1bb       hello-node-65f5d5cc78-kfp9f
	3f0eaeb3345f9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 seconds ago        Exited              mount-munger              0                   ea7c87a2bcf7a       busybox-mount
	64d7a6df0738b       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   6be85bd482dc2       hello-node-connect-6f49f58cd5-jtg5v
	d15d655f1eb22       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                         23 seconds ago       Running             myfrontend                0                   5e18c9d65c07c       sp-pod
	0e561797133eb       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         39 seconds ago       Running             nginx                     0                   d7f8de63b1051       nginx-svc
	a0a68ab32aa9b       2437cf7621777                                                                                         About a minute ago   Running             coredns                   2                   1dad4d6960403       coredns-7db6d8ff4d-kp4gr
	6fea5f7dcd9ed       2351f570ed0ea                                                                                         About a minute ago   Running             kube-proxy                2                   4330d85c3d959       kube-proxy-nskk4
	1eb06df2800cc       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   61287eccee6af       storage-provisioner
	4697ddaf1b586       d48f992a22722                                                                                         About a minute ago   Running             kube-scheduler            2                   5c759a4884ae0       kube-scheduler-functional-963000
	6edc53f88a470       014faa467e297                                                                                         About a minute ago   Running             etcd                      2                   6f253e39f6ae0       etcd-functional-963000
	27913fcc1e1e7       61773190d42ff                                                                                         About a minute ago   Running             kube-apiserver            0                   fb739e0102e4b       kube-apiserver-functional-963000
	9b9e43a00a959       8e97cdb19e7cc                                                                                         About a minute ago   Running             kube-controller-manager   2                   811a6656ca54a       kube-controller-manager-functional-963000
	13fddf9e54d86       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       2                   e878c3fcac186       storage-provisioner
	0540123798c2e       2437cf7621777                                                                                         2 minutes ago        Exited              coredns                   1                   215bb3efd5b52       coredns-7db6d8ff4d-kp4gr
	866892d538584       2351f570ed0ea                                                                                         2 minutes ago        Exited              kube-proxy                1                   bd366cf6feebb       kube-proxy-nskk4
	a2e42ba1ff40f       d48f992a22722                                                                                         2 minutes ago        Exited              kube-scheduler            1                   967672f1eac17       kube-scheduler-functional-963000
	266accd433d6e       014faa467e297                                                                                         2 minutes ago        Exited              etcd                      1                   58a9849e1bfb7       etcd-functional-963000
	a8b36cb2d145e       8e97cdb19e7cc                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   430c0757f0202       kube-controller-manager-functional-963000
	
	
	==> coredns [0540123798c2] <==
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51703 - 5347 "HINFO IN 5283711312173179607.4668503181572480635. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009939309s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[741195604]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:37:40.453) (total time: 30000ms):
	Trace[741195604]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:38:10.453)
	Trace[741195604]: [30.000445291s] [30.000445291s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1867804338]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:37:40.453) (total time: 30000ms):
	Trace[1867804338]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:38:10.453)
	Trace[1867804338]: [30.000443652s] [30.000443652s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2105526014]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (25-Jul-2024 17:37:40.453) (total time: 30000ms):
	Trace[2105526014]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (17:38:10.453)
	Trace[2105526014]: [30.000517981s] [30.000517981s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a0a68ab32aa9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57173 - 6786 "HINFO IN 6911561200612050405.120792029681290333. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.008877611s
	[INFO] 10.244.0.1:55987 - 16586 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00009724s
	[INFO] 10.244.0.1:13998 - 16933 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000080485s
	[INFO] 10.244.0.1:22569 - 11235 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029468s
	[INFO] 10.244.0.1:32544 - 37039 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001037799s
	[INFO] 10.244.0.1:38928 - 48074 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.0000749s
	[INFO] 10.244.0.1:14669 - 22610 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000115996s
	
	
	==> describe nodes <==
	Name:               functional-963000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-963000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=functional-963000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T10_37_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 17:37:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-963000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 17:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 17:39:50 +0000   Thu, 25 Jul 2024 17:37:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 17:39:50 +0000   Thu, 25 Jul 2024 17:37:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 17:39:50 +0000   Thu, 25 Jul 2024 17:37:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 17:39:50 +0000   Thu, 25 Jul 2024 17:37:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-963000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 6313d904400a447ba879ce7d5e909ad8
	  System UUID:                6313d904400a447ba879ce7d5e909ad8
	  Boot ID:                    fd2988c4-9451-4c2a-8a9d-0123f7862835
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.0
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-kfp9f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     hello-node-connect-6f49f58cd5-jtg5v          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 coredns-7db6d8ff4d-kp4gr                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m38s
	  kube-system                 etcd-functional-963000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m53s
	  kube-system                 kube-apiserver-functional-963000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-functional-963000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-nskk4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kube-scheduler-functional-963000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-hzq98    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-7tsl7        0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m38s                  kube-proxy       
	  Normal  Starting                 67s                    kube-proxy       
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node functional-963000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node functional-963000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node functional-963000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m53s                  kubelet          Node functional-963000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m53s                  kubelet          Node functional-963000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s                  kubelet          Node functional-963000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m53s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m49s                  kubelet          Node functional-963000 status is now: NodeReady
	  Normal  RegisteredNode           2m40s                  node-controller  Node functional-963000 event: Registered Node functional-963000 in Controller
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m21s)  kubelet          Node functional-963000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m21s)  kubelet          Node functional-963000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m21s)  kubelet          Node functional-963000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-963000 event: Registered Node functional-963000 in Controller
	  Normal  Starting                 72s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s (x8 over 72s)      kubelet          Node functional-963000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x8 over 72s)      kubelet          Node functional-963000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x7 over 72s)      kubelet          Node functional-963000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           57s                    node-controller  Node functional-963000 event: Registered Node functional-963000 in Controller
	
	
	==> dmesg <==
	[  +3.385988] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.723385] kauditd_printk_skb: 34 callbacks suppressed
	[Jul25 17:38] systemd-fstab-generator[5340]: Ignoring "noauto" option for root device
	[ +10.576632] systemd-fstab-generator[5784]: Ignoring "noauto" option for root device
	[  +0.052086] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.101203] systemd-fstab-generator[5818]: Ignoring "noauto" option for root device
	[  +0.106441] systemd-fstab-generator[5830]: Ignoring "noauto" option for root device
	[  +0.100703] systemd-fstab-generator[5844]: Ignoring "noauto" option for root device
	[  +5.101746] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.361863] systemd-fstab-generator[6460]: Ignoring "noauto" option for root device
	[  +0.083887] systemd-fstab-generator[6472]: Ignoring "noauto" option for root device
	[  +0.072592] systemd-fstab-generator[6484]: Ignoring "noauto" option for root device
	[  +0.085660] systemd-fstab-generator[6499]: Ignoring "noauto" option for root device
	[  +0.234340] systemd-fstab-generator[6673]: Ignoring "noauto" option for root device
	[  +0.968161] systemd-fstab-generator[6796]: Ignoring "noauto" option for root device
	[  +1.343378] kauditd_printk_skb: 194 callbacks suppressed
	[ +13.953807] kauditd_printk_skb: 36 callbacks suppressed
	[Jul25 17:39] systemd-fstab-generator[7781]: Ignoring "noauto" option for root device
	[  +6.904261] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.098498] kauditd_printk_skb: 19 callbacks suppressed
	[  +7.671842] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.030865] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.515472] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.479860] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.000340] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [266accd433d6] <==
	{"level":"info","ts":"2024-07-25T17:37:37.633812Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-25T17:37:39.02836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-25T17:37:39.02847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-25T17:37:39.028505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-25T17:37:39.028542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-25T17:37:39.028556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-25T17:37:39.028579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-25T17:37:39.028594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-25T17:37:39.031724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T17:37:39.031738Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-963000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T17:37:39.032077Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T17:37:39.032247Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T17:37:39.032271Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T17:37:39.034677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T17:37:39.034679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-25T17:38:31.734714Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-25T17:38:31.734746Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-963000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-25T17:38:31.734791Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:38:31.734835Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:38:31.744088Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-25T17:38:31.744115Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-25T17:38:31.744143Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-25T17:38:31.746331Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-25T17:38:31.746367Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-25T17:38:31.746371Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-963000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [6edc53f88a47] <==
	{"level":"info","ts":"2024-07-25T17:38:46.650743Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T17:38:46.650775Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-25T17:38:46.650874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-25T17:38:46.650912Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-25T17:38:46.650952Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T17:38:46.650989Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T17:38:46.652704Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T17:38:46.652767Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-25T17:38:46.652817Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-25T17:38:46.653595Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T17:38:46.653613Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T17:38:48.14714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-25T17:38:48.147292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-25T17:38:48.147379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-25T17:38:48.147417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-25T17:38:48.147476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-25T17:38:48.147538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-25T17:38:48.147595Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-25T17:38:48.15001Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-963000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T17:38:48.15014Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T17:38:48.151049Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T17:38:48.151274Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T17:38:48.151697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T17:38:48.156029Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T17:38:48.156268Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 17:39:57 up 3 min,  0 users,  load average: 1.06, 0.66, 0.27
	Linux functional-963000 5.10.207 #1 SMP PREEMPT Tue Jul 23 01:19:38 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [27913fcc1e1e] <==
	I0725 17:38:48.770189       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0725 17:38:48.770192       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0725 17:38:48.770402       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0725 17:38:48.770430       1 aggregator.go:165] initial CRD sync complete...
	I0725 17:38:48.770436       1 autoregister_controller.go:141] Starting autoregister controller
	I0725 17:38:48.770439       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0725 17:38:48.770441       1 cache.go:39] Caches are synced for autoregister controller
	E0725 17:38:48.772155       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0725 17:38:48.790568       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 17:38:49.674736       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 17:38:50.254532       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0725 17:38:50.258413       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0725 17:38:50.269131       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0725 17:38:50.276610       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 17:38:50.278600       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 17:39:00.852498       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 17:39:00.856589       1 controller.go:615] quota admission added evaluator for: endpoints
	I0725 17:39:10.641439       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.123.127"}
	I0725 17:39:15.618226       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.29.119"}
	I0725 17:39:26.989550       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0725 17:39:27.031573       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.122.86"}
	I0725 17:39:40.246352       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.226.117"}
	I0725 17:39:57.235048       1 controller.go:615] quota admission added evaluator for: namespaces
	I0725 17:39:57.382852       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.223.157"}
	I0725 17:39:57.413985       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.127.68"}
	
	
	==> kube-controller-manager [9b9e43a00a95] <==
	I0725 17:39:48.134841       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="25.717µs"
	I0725 17:39:53.686556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="26.008µs"
	I0725 17:39:54.184054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="25.216µs"
	I0725 17:39:57.305820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="15.277686ms"
	E0725 17:39:57.305951       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.314748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.773399ms"
	E0725 17:39:57.314772       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.319994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="14.761236ms"
	E0725 17:39:57.320109       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.320425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.643565ms"
	E0725 17:39:57.320435       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.325271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.146662ms"
	E0725 17:39:57.325292       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.328635       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.326771ms"
	E0725 17:39:57.328656       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.328695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.698801ms"
	E0725 17:39:57.328730       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0725 17:39:57.352847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.26392ms"
	I0725 17:39:57.357892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.838719ms"
	I0725 17:39:57.358008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="27.175µs"
	I0725 17:39:57.364601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="19.631µs"
	I0725 17:39:57.394932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="25.122539ms"
	I0725 17:39:57.404434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.477282ms"
	I0725 17:39:57.404792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="126.247µs"
	I0725 17:39:57.407833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="20.173µs"
	
	
	==> kube-controller-manager [a8b36cb2d145] <==
	I0725 17:37:51.821238       1 shared_informer.go:320] Caches are synced for persistent volume
	I0725 17:37:51.826486       1 shared_informer.go:320] Caches are synced for expand
	I0725 17:37:51.830802       1 shared_informer.go:320] Caches are synced for disruption
	I0725 17:37:51.830808       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0725 17:37:51.832973       1 shared_informer.go:320] Caches are synced for TTL
	I0725 17:37:51.834058       1 shared_informer.go:320] Caches are synced for job
	I0725 17:37:51.836201       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0725 17:37:51.839473       1 shared_informer.go:320] Caches are synced for GC
	I0725 17:37:51.839506       1 shared_informer.go:320] Caches are synced for ephemeral
	I0725 17:37:51.907287       1 shared_informer.go:320] Caches are synced for crt configmap
	I0725 17:37:51.928806       1 shared_informer.go:320] Caches are synced for daemon sets
	I0725 17:37:51.937111       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0725 17:37:51.938414       1 shared_informer.go:320] Caches are synced for taint
	I0725 17:37:51.938525       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0725 17:37:51.938681       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-963000"
	I0725 17:37:51.939013       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0725 17:37:52.053406       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:37:52.060390       1 shared_informer.go:320] Caches are synced for resource quota
	I0725 17:37:52.253225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="416.985307ms"
	I0725 17:37:52.253301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.95µs"
	I0725 17:37:52.466724       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:37:52.531973       1 shared_informer.go:320] Caches are synced for garbage collector
	I0725 17:37:52.532014       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0725 17:38:19.982624       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.726954ms"
	I0725 17:38:19.982910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.552µs"
	
	
	==> kube-proxy [6fea5f7dcd9e] <==
	I0725 17:38:50.209194       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:38:50.226751       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0725 17:38:50.241007       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:38:50.241468       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:38:50.241504       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:38:50.242514       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:38:50.242606       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:38:50.242681       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:38:50.243178       1 config.go:192] "Starting service config controller"
	I0725 17:38:50.243197       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:38:50.243226       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:38:50.243237       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:38:50.243469       1 config.go:319] "Starting node config controller"
	I0725 17:38:50.243482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:38:50.343833       1 shared_informer.go:320] Caches are synced for node config
	I0725 17:38:50.343835       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:38:50.343888       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [866892d53858] <==
	I0725 17:37:40.435077       1 server_linux.go:69] "Using iptables proxy"
	I0725 17:37:40.442792       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0725 17:37:40.456685       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0725 17:37:40.456708       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0725 17:37:40.456718       1 server_linux.go:165] "Using iptables Proxier"
	I0725 17:37:40.457420       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 17:37:40.457492       1 server.go:872] "Version info" version="v1.30.3"
	I0725 17:37:40.457501       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:37:40.457875       1 config.go:192] "Starting service config controller"
	I0725 17:37:40.457885       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 17:37:40.457896       1 config.go:101] "Starting endpoint slice config controller"
	I0725 17:37:40.457898       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 17:37:40.458096       1 config.go:319] "Starting node config controller"
	I0725 17:37:40.458103       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 17:37:40.558133       1 shared_informer.go:320] Caches are synced for node config
	I0725 17:37:40.558169       1 shared_informer.go:320] Caches are synced for service config
	I0725 17:37:40.558442       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4697ddaf1b58] <==
	I0725 17:38:47.003118       1 serving.go:380] Generated self-signed cert in-memory
	W0725 17:38:48.685341       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 17:38:48.685429       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 17:38:48.685452       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 17:38:48.685482       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 17:38:48.720458       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 17:38:48.722176       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:38:48.725206       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 17:38:48.725258       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 17:38:48.725277       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 17:38:48.725270       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 17:38:48.828041       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a2e42ba1ff40] <==
	I0725 17:37:38.030115       1 serving.go:380] Generated self-signed cert in-memory
	W0725 17:37:39.574705       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0725 17:37:39.574803       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 17:37:39.574834       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 17:37:39.574854       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 17:37:39.604777       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0725 17:37:39.604794       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 17:37:39.605512       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0725 17:37:39.605555       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 17:37:39.605713       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0725 17:37:39.605757       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0725 17:37:39.706650       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 17:38:31.728976       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0725 17:38:31.729341       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0725 17:38:31.729400       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0725 17:38:31.729482       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 25 17:39:48 functional-963000 kubelet[6803]: I0725 17:39:48.127296    6803 scope.go:117] "RemoveContainer" containerID="3f6d9df33436816c11b98f1a31b12613e32d3f0689785ba8b298e12690dd331c"
	Jul 25 17:39:48 functional-963000 kubelet[6803]: I0725 17:39:48.127460    6803 scope.go:117] "RemoveContainer" containerID="64d7a6df0738bddd5d8d59839c459292f0e3eae2f9b1c552e8a5b1223cf9b257"
	Jul 25 17:39:48 functional-963000 kubelet[6803]: E0725 17:39:48.127565    6803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-jtg5v_default(b08f7430-f439-4432-86e5-b81aaca8302f)\"" pod="default/hello-node-connect-6f49f58cd5-jtg5v" podUID="b08f7430-f439-4432-86e5-b81aaca8302f"
	Jul 25 17:39:49 functional-963000 kubelet[6803]: I0725 17:39:49.553670    6803 topology_manager.go:215] "Topology Admit Handler" podUID="4dbbd253-d660-4511-84cd-3f6ffeb3912b" podNamespace="default" podName="busybox-mount"
	Jul 25 17:39:49 functional-963000 kubelet[6803]: I0725 17:39:49.720558    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dbbd253-d660-4511-84cd-3f6ffeb3912b-test-volume\") pod \"busybox-mount\" (UID: \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\") " pod="default/busybox-mount"
	Jul 25 17:39:49 functional-963000 kubelet[6803]: I0725 17:39:49.720580    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdgw8\" (UniqueName: \"kubernetes.io/projected/4dbbd253-d660-4511-84cd-3f6ffeb3912b-kube-api-access-sdgw8\") pod \"busybox-mount\" (UID: \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\") " pod="default/busybox-mount"
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.234729    6803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdgw8\" (UniqueName: \"kubernetes.io/projected/4dbbd253-d660-4511-84cd-3f6ffeb3912b-kube-api-access-sdgw8\") pod \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\" (UID: \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\") "
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.234760    6803 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dbbd253-d660-4511-84cd-3f6ffeb3912b-test-volume\") pod \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\" (UID: \"4dbbd253-d660-4511-84cd-3f6ffeb3912b\") "
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.234790    6803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4dbbd253-d660-4511-84cd-3f6ffeb3912b-test-volume" (OuterVolumeSpecName: "test-volume") pod "4dbbd253-d660-4511-84cd-3f6ffeb3912b" (UID: "4dbbd253-d660-4511-84cd-3f6ffeb3912b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.238099    6803 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbbd253-d660-4511-84cd-3f6ffeb3912b-kube-api-access-sdgw8" (OuterVolumeSpecName: "kube-api-access-sdgw8") pod "4dbbd253-d660-4511-84cd-3f6ffeb3912b" (UID: "4dbbd253-d660-4511-84cd-3f6ffeb3912b"). InnerVolumeSpecName "kube-api-access-sdgw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.335287    6803 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/4dbbd253-d660-4511-84cd-3f6ffeb3912b-test-volume\") on node \"functional-963000\" DevicePath \"\""
	Jul 25 17:39:52 functional-963000 kubelet[6803]: I0725 17:39:52.335299    6803 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sdgw8\" (UniqueName: \"kubernetes.io/projected/4dbbd253-d660-4511-84cd-3f6ffeb3912b-kube-api-access-sdgw8\") on node \"functional-963000\" DevicePath \"\""
	Jul 25 17:39:53 functional-963000 kubelet[6803]: I0725 17:39:53.157721    6803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7c87a2bcf7a648c8121524e9fc730adfb47cf628bafebb4b3e7002074f5882"
	Jul 25 17:39:53 functional-963000 kubelet[6803]: I0725 17:39:53.679960    6803 scope.go:117] "RemoveContainer" containerID="e4834e2b06b58bec95d2aad77e533bb6650e2028d21e34f7757e61a8e31d68c8"
	Jul 25 17:39:54 functional-963000 kubelet[6803]: I0725 17:39:54.164960    6803 scope.go:117] "RemoveContainer" containerID="e4834e2b06b58bec95d2aad77e533bb6650e2028d21e34f7757e61a8e31d68c8"
	Jul 25 17:39:54 functional-963000 kubelet[6803]: I0725 17:39:54.165113    6803 scope.go:117] "RemoveContainer" containerID="7a985c31946df77d127d2b3b0d177d70ce6a01d863482b241f5de6945712644a"
	Jul 25 17:39:54 functional-963000 kubelet[6803]: E0725 17:39:54.165185    6803 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-kfp9f_default(5560f8f4-890d-456c-a1fb-42d0dacaaca7)\"" pod="default/hello-node-65f5d5cc78-kfp9f" podUID="5560f8f4-890d-456c-a1fb-42d0dacaaca7"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.351579    6803 topology_manager.go:215] "Topology Admit Handler" podUID="f28bd23c-fd42-4e61-b16e-f54810e896c7" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-7tsl7"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: E0725 17:39:57.351627    6803 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4dbbd253-d660-4511-84cd-3f6ffeb3912b" containerName="mount-munger"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.351645    6803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbbd253-d660-4511-84cd-3f6ffeb3912b" containerName="mount-munger"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.364352    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxv44\" (UniqueName: \"kubernetes.io/projected/f28bd23c-fd42-4e61-b16e-f54810e896c7-kube-api-access-kxv44\") pod \"kubernetes-dashboard-779776cb65-7tsl7\" (UID: \"f28bd23c-fd42-4e61-b16e-f54810e896c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-7tsl7"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.364380    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f28bd23c-fd42-4e61-b16e-f54810e896c7-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-7tsl7\" (UID: \"f28bd23c-fd42-4e61-b16e-f54810e896c7\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-7tsl7"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.393168    6803 topology_manager.go:215] "Topology Admit Handler" podUID="7128e134-af96-439f-a159-8fda4a7639c8" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-hzq98"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.565559    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7128e134-af96-439f-a159-8fda4a7639c8-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-hzq98\" (UID: \"7128e134-af96-439f-a159-8fda4a7639c8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hzq98"
	Jul 25 17:39:57 functional-963000 kubelet[6803]: I0725 17:39:57.565626    6803 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmdm\" (UniqueName: \"kubernetes.io/projected/7128e134-af96-439f-a159-8fda4a7639c8-kube-api-access-psmdm\") pod \"dashboard-metrics-scraper-b5fc48f67-hzq98\" (UID: \"7128e134-af96-439f-a159-8fda4a7639c8\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-hzq98"
	
	
	==> storage-provisioner [13fddf9e54d8] <==
	I0725 17:37:51.981478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 17:37:51.984629       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 17:37:51.984671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 17:38:09.369848       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 17:38:09.369970       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-963000_691676f2-003f-4052-a361-fc44567e2c85!
	I0725 17:38:09.370565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46d7b6d6-914e-48bc-9d2b-25c35a009bd5", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-963000_691676f2-003f-4052-a361-fc44567e2c85 became leader
	I0725 17:38:09.470817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-963000_691676f2-003f-4052-a361-fc44567e2c85!
	
	
	==> storage-provisioner [1eb06df2800c] <==
	I0725 17:38:50.140089       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 17:38:50.144354       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 17:38:50.144374       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 17:39:07.527506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 17:39:07.527764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46d7b6d6-914e-48bc-9d2b-25c35a009bd5", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-963000_bc552b34-622b-46ba-a2b3-8654ab4f896b became leader
	I0725 17:39:07.528433       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-963000_bc552b34-622b-46ba-a2b3-8654ab4f896b!
	I0725 17:39:07.630629       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-963000_bc552b34-622b-46ba-a2b3-8654ab4f896b!
	I0725 17:39:21.474471       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0725 17:39:21.474504       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    c06494d0-7574-47ad-945b-5d052a554dd5 382 0 2024-07-25 17:37:19 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-25 17:37:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-659ab5f8-6729-4a21-ba13-5217cb1131df &PersistentVolumeClaim{ObjectMeta:{myclaim  default  659ab5f8-6729-4a21-ba13-5217cb1131df 715 0 2024-07-25 17:39:21 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-25 17:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-25 17:39:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0725 17:39:21.475046       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-659ab5f8-6729-4a21-ba13-5217cb1131df" provisioned
	I0725 17:39:21.475057       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0725 17:39:21.475060       1 volume_store.go:212] Trying to save persistentvolume "pvc-659ab5f8-6729-4a21-ba13-5217cb1131df"
	I0725 17:39:21.475277       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"659ab5f8-6729-4a21-ba13-5217cb1131df", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0725 17:39:21.478812       1 volume_store.go:219] persistentvolume "pvc-659ab5f8-6729-4a21-ba13-5217cb1131df" saved
	I0725 17:39:21.479002       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"659ab5f8-6729-4a21-ba13-5217cb1131df", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-659ab5f8-6729-4a21-ba13-5217cb1131df
	

-- /stdout --
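
The serviceaccount "kubernetes-dashboard" not found errors in the kube-controller-manager log above are a startup race rather than a persistent fault: the ReplicaSet controller retries until the dashboard addon creates its ServiceAccount, and the "Finished syncing" entries from 17:39:57.35 onward show those retries succeeding. A quick way to confirm the namespace settled after the run (a sketch; assumes the functional-963000 kubeconfig context is still available):

	kubectl --context functional-963000 -n kubernetes-dashboard get serviceaccount,deployment,pod
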
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-963000 -n functional-963000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-963000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-b5fc48f67-hzq98 kubernetes-dashboard-779776cb65-7tsl7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-963000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hzq98 kubernetes-dashboard-779776cb65-7tsl7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-963000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hzq98 kubernetes-dashboard-779776cb65-7tsl7: exit status 1 (41.97225ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-963000/192.168.105.4
	Start Time:       Thu, 25 Jul 2024 10:39:49 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://3f0eaeb3345f990c2297e0f3895e8e34a78fd79d7b3a08e39cbe9e6f126c987d
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 25 Jul 2024 10:39:51 -0700
	      Finished:     Thu, 25 Jul 2024 10:39:51 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sdgw8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-sdgw8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  8s    default-scheduler  Successfully assigned default/busybox-mount to functional-963000
	  Normal  Pulling    9s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     8s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.048s (1.048s including waiting). Image size: 3547125 bytes.
	  Normal  Created    7s    kubelet            Created container mount-munger
	  Normal  Started    7s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-b5fc48f67-hzq98" not found
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-7tsl7" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-963000 describe pod busybox-mount dashboard-metrics-scraper-b5fc48f67-hzq98 kubernetes-dashboard-779776cb65-7tsl7: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.22s)
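
Note that the kubelet log above shows the backing pod crash-looping ("back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-jtg5v"), so the connectivity check ran against a Service with no ready endpoints. To triage a failure like this by hand, something along these lines (a sketch; assumes the app=hello-node-connect label that kubectl create deployment applies by default):

	kubectl --context functional-963000 get endpoints hello-node-connect
	kubectl --context functional-963000 logs deployment/hello-node-connect --previous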

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-603000 node stop m02 -v=7 --alsologtostderr: (12.191869291s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
E0725 10:45:37.230657    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:46:59.150784    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:47:10.266483    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (2m55.969750375s)

-- stdout --
	ha-603000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-603000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-603000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0725 10:45:20.299067    3062 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:45:20.299222    3062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:45:20.299226    3062 out.go:304] Setting ErrFile to fd 2...
	I0725 10:45:20.299228    3062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:45:20.299369    3062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:45:20.299500    3062 out.go:298] Setting JSON to false
	I0725 10:45:20.299513    3062 mustload.go:65] Loading cluster: ha-603000
	I0725 10:45:20.299550    3062 notify.go:220] Checking for updates...
	I0725 10:45:20.299737    3062 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:45:20.299744    3062 status.go:255] checking status of ha-603000 ...
	I0725 10:45:20.300412    3062 status.go:330] ha-603000 host status = "Running" (err=<nil>)
	I0725 10:45:20.300420    3062 host.go:66] Checking if "ha-603000" exists ...
	I0725 10:45:20.300521    3062 host.go:66] Checking if "ha-603000" exists ...
	I0725 10:45:20.300639    3062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:45:20.300647    3062 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/id_rsa Username:docker}
	W0725 10:45:46.227209    3062 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0725 10:45:46.227371    3062 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0725 10:45:46.227392    3062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0725 10:45:46.227402    3062 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:45:46.227423    3062 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0725 10:45:46.227438    3062 status.go:255] checking status of ha-603000-m02 ...
	I0725 10:45:46.227776    3062 status.go:330] ha-603000-m02 host status = "Stopped" (err=<nil>)
	I0725 10:45:46.227788    3062 status.go:343] host is not running, skipping remaining checks
	I0725 10:45:46.227791    3062 status.go:257] ha-603000-m02 status: &{Name:ha-603000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:45:46.227797    3062 status.go:255] checking status of ha-603000-m03 ...
	I0725 10:45:46.229720    3062 status.go:330] ha-603000-m03 host status = "Running" (err=<nil>)
	I0725 10:45:46.229730    3062 host.go:66] Checking if "ha-603000-m03" exists ...
	I0725 10:45:46.229872    3062 host.go:66] Checking if "ha-603000-m03" exists ...
	I0725 10:45:46.230002    3062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:45:46.230012    3062 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m03/id_rsa Username:docker}
	W0725 10:47:01.230104    3062 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0725 10:47:01.230171    3062 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0725 10:47:01.230188    3062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0725 10:47:01.230192    3062 status.go:257] ha-603000-m03 status: &{Name:ha-603000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:47:01.230201    3062 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0725 10:47:01.230206    3062 status.go:255] checking status of ha-603000-m04 ...
	I0725 10:47:01.230952    3062 status.go:330] ha-603000-m04 host status = "Running" (err=<nil>)
	I0725 10:47:01.230962    3062 host.go:66] Checking if "ha-603000-m04" exists ...
	I0725 10:47:01.231050    3062 host.go:66] Checking if "ha-603000-m04" exists ...
	I0725 10:47:01.231165    3062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:47:01.231171    3062 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m04/id_rsa Username:docker}
	W0725 10:48:16.230921    3062 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0725 10:48:16.230973    3062 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0725 10:48:16.230983    3062 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0725 10:48:16.230987    3062 status.go:257] ha-603000-m04 status: &{Name:ha-603000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:48:16.230995    3062 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-603000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 3 (25.959405042s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0725 10:48:42.190077    3090 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0725 10:48:42.190085    3090 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
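
Every probe in this test fails the same way: "dial tcp 192.168.105.x:22: connect: operation timed out", i.e. the surviving qemu2 guests stopped answering SSH, rather than minikube misreading node state. The check the status command performs can be reproduced by hand against one node (a sketch; the key path and address are taken from the log above):

	ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
	    -i /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/id_rsa \
	    docker@192.168.105.5 -- "df -h /var"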

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0725 10:49:15.285064    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:49:42.988739    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m17.281448208s)
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 3 (25.964527709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0725 10:50:25.429847    3122 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0725 10:50:25.429892    3122 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (103.25s)
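
The assertion compares the Status field of the profile-list JSON: the test expects "Degraded" (one control plane stopped, two still up) but gets "Stopped", and the 1m17s runtime of profile list itself is consistent with the SSH timeouts seen in the previous test. Given the output shape quoted above, the field can be pulled out directly (a sketch; assumes jq is installed):

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'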

TestMultiControlPlane/serial/RestartSecondaryNode (209.06s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.109800625s)

-- stdout --
	* Starting "ha-603000-m02" control-plane node in "ha-603000" cluster
	* Restarting existing qemu2 VM for "ha-603000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-603000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 10:50:25.500388    3128 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:50:25.500702    3128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:50:25.500707    3128 out.go:304] Setting ErrFile to fd 2...
	I0725 10:50:25.500710    3128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:50:25.500887    3128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:50:25.501200    3128 mustload.go:65] Loading cluster: ha-603000
	I0725 10:50:25.501501    3128 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0725 10:50:25.501802    3128 host.go:58] "ha-603000-m02" host status: Stopped
	I0725 10:50:25.505303    3128 out.go:177] * Starting "ha-603000-m02" control-plane node in "ha-603000" cluster
	I0725 10:50:25.508275    3128 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:50:25.508290    3128 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 10:50:25.508300    3128 cache.go:56] Caching tarball of preloaded images
	I0725 10:50:25.508367    3128 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 10:50:25.508374    3128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:50:25.508441    3128 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/ha-603000/config.json ...
	I0725 10:50:25.508854    3128 start.go:360] acquireMachinesLock for ha-603000-m02: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:50:25.508904    3128 start.go:364] duration metric: took 35µs to acquireMachinesLock for "ha-603000-m02"
	I0725 10:50:25.508915    3128 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:50:25.508922    3128 fix.go:54] fixHost starting: m02
	I0725 10:50:25.509099    3128 fix.go:112] recreateIfNeeded on ha-603000-m02: state=Stopped err=<nil>
	W0725 10:50:25.509106    3128 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:50:25.512191    3128 out.go:177] * Restarting existing qemu2 VM for "ha-603000-m02" ...
	I0725 10:50:25.516208    3128 qemu.go:418] Using hvf for hardware acceleration
	I0725 10:50:25.516263    3128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:4f:b5:33:2b:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/disk.qcow2
	I0725 10:50:25.519190    3128 main.go:141] libmachine: STDOUT: 
	I0725 10:50:25.519215    3128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 10:50:25.519247    3128 fix.go:56] duration metric: took 10.3245ms for fixHost
	I0725 10:50:25.519253    3128 start.go:83] releasing machines lock for "ha-603000-m02", held for 10.343709ms
	W0725 10:50:25.519262    3128 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 10:50:25.519301    3128 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 10:50:25.519307    3128 start.go:729] Will try again in 5 seconds ...
	I0725 10:50:30.521252    3128 start.go:360] acquireMachinesLock for ha-603000-m02: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:50:30.521388    3128 start.go:364] duration metric: took 94.709µs to acquireMachinesLock for "ha-603000-m02"
	I0725 10:50:30.521419    3128 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:50:30.521423    3128 fix.go:54] fixHost starting: m02
	I0725 10:50:30.521582    3128 fix.go:112] recreateIfNeeded on ha-603000-m02: state=Stopped err=<nil>
	W0725 10:50:30.521589    3128 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:50:30.522913    3128 out.go:177] * Restarting existing qemu2 VM for "ha-603000-m02" ...
	I0725 10:50:30.526539    3128 qemu.go:418] Using hvf for hardware acceleration
	I0725 10:50:30.526573    3128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:4f:b5:33:2b:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/disk.qcow2
	I0725 10:50:30.528678    3128 main.go:141] libmachine: STDOUT: 
	I0725 10:50:30.528694    3128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 10:50:30.528719    3128 fix.go:56] duration metric: took 7.296458ms for fixHost
	I0725 10:50:30.528724    3128 start.go:83] releasing machines lock for "ha-603000-m02", held for 7.331417ms
	W0725 10:50:30.528766    3128 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 10:50:30.532586    3128 out.go:177] 
	W0725 10:50:30.536563    3128 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 10:50:30.536570    3128 out.go:239] * 
	* 
	W0725 10:50:30.538143    3128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:50:30.541666    3128 out.go:177] 

** /stderr **
ha_test.go:422: I0725 10:50:25.500388    3128 out.go:291] Setting OutFile to fd 1 ...
I0725 10:50:25.500702    3128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:50:25.500707    3128 out.go:304] Setting ErrFile to fd 2...
I0725 10:50:25.500710    3128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:50:25.500887    3128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:50:25.501200    3128 mustload.go:65] Loading cluster: ha-603000
I0725 10:50:25.501501    3128 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0725 10:50:25.501802    3128 host.go:58] "ha-603000-m02" host status: Stopped
I0725 10:50:25.505303    3128 out.go:177] * Starting "ha-603000-m02" control-plane node in "ha-603000" cluster
I0725 10:50:25.508275    3128 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0725 10:50:25.508290    3128 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0725 10:50:25.508300    3128 cache.go:56] Caching tarball of preloaded images
I0725 10:50:25.508367    3128 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0725 10:50:25.508374    3128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0725 10:50:25.508441    3128 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/ha-603000/config.json ...
I0725 10:50:25.508854    3128 start.go:360] acquireMachinesLock for ha-603000-m02: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0725 10:50:25.508904    3128 start.go:364] duration metric: took 35µs to acquireMachinesLock for "ha-603000-m02"
I0725 10:50:25.508915    3128 start.go:96] Skipping create...Using existing machine configuration
I0725 10:50:25.508922    3128 fix.go:54] fixHost starting: m02
I0725 10:50:25.509099    3128 fix.go:112] recreateIfNeeded on ha-603000-m02: state=Stopped err=<nil>
W0725 10:50:25.509106    3128 fix.go:138] unexpected machine state, will restart: <nil>
I0725 10:50:25.512191    3128 out.go:177] * Restarting existing qemu2 VM for "ha-603000-m02" ...
I0725 10:50:25.516208    3128 qemu.go:418] Using hvf for hardware acceleration
I0725 10:50:25.516263    3128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:4f:b5:33:2b:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/disk.qcow2
I0725 10:50:25.519190    3128 main.go:141] libmachine: STDOUT: 
I0725 10:50:25.519215    3128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0725 10:50:25.519247    3128 fix.go:56] duration metric: took 10.3245ms for fixHost
I0725 10:50:25.519253    3128 start.go:83] releasing machines lock for "ha-603000-m02", held for 10.343709ms
W0725 10:50:25.519262    3128 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0725 10:50:25.519301    3128 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0725 10:50:25.519307    3128 start.go:729] Will try again in 5 seconds ...
I0725 10:50:30.521252    3128 start.go:360] acquireMachinesLock for ha-603000-m02: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0725 10:50:30.521388    3128 start.go:364] duration metric: took 94.709µs to acquireMachinesLock for "ha-603000-m02"
I0725 10:50:30.521419    3128 start.go:96] Skipping create...Using existing machine configuration
I0725 10:50:30.521423    3128 fix.go:54] fixHost starting: m02
I0725 10:50:30.521582    3128 fix.go:112] recreateIfNeeded on ha-603000-m02: state=Stopped err=<nil>
W0725 10:50:30.521589    3128 fix.go:138] unexpected machine state, will restart: <nil>
I0725 10:50:30.522913    3128 out.go:177] * Restarting existing qemu2 VM for "ha-603000-m02" ...
I0725 10:50:30.526539    3128 qemu.go:418] Using hvf for hardware acceleration
I0725 10:50:30.526573    3128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:4f:b5:33:2b:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m02/disk.qcow2
I0725 10:50:30.528678    3128 main.go:141] libmachine: STDOUT: 
I0725 10:50:30.528694    3128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0725 10:50:30.528719    3128 fix.go:56] duration metric: took 7.296458ms for fixHost
I0725 10:50:30.528724    3128 start.go:83] releasing machines lock for "ha-603000-m02", held for 7.331417ms
W0725 10:50:30.528766    3128 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0725 10:50:30.532586    3128 out.go:177] 
W0725 10:50:30.536563    3128 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0725 10:50:30.536570    3128 out.go:239] * 
* 
W0725 10:50:30.538143    3128 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0725 10:50:30.541666    3128 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-603000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
E0725 10:52:10.258826    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (2m57.992749917s)

-- stdout --
	ha-603000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-603000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-603000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0725 10:50:30.575538    3134 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:50:30.575696    3134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:50:30.575700    3134 out.go:304] Setting ErrFile to fd 2...
	I0725 10:50:30.575705    3134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:50:30.575837    3134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:50:30.575960    3134 out.go:298] Setting JSON to false
	I0725 10:50:30.575971    3134 mustload.go:65] Loading cluster: ha-603000
	I0725 10:50:30.576013    3134 notify.go:220] Checking for updates...
	I0725 10:50:30.576189    3134 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:50:30.576196    3134 status.go:255] checking status of ha-603000 ...
	I0725 10:50:30.576870    3134 status.go:330] ha-603000 host status = "Running" (err=<nil>)
	I0725 10:50:30.576879    3134 host.go:66] Checking if "ha-603000" exists ...
	I0725 10:50:30.576974    3134 host.go:66] Checking if "ha-603000" exists ...
	I0725 10:50:30.577081    3134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:50:30.577089    3134 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/id_rsa Username:docker}
	W0725 10:50:30.577286    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0725 10:50:30.577302    3134 retry.go:31] will retry after 139.301859ms: dial tcp 192.168.105.5:22: connect: host is down
	W0725 10:50:30.718729    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0725 10:50:30.718744    3134 retry.go:31] will retry after 394.745299ms: dial tcp 192.168.105.5:22: connect: host is down
	W0725 10:50:31.115643    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0725 10:50:31.115664    3134 retry.go:31] will retry after 765.177335ms: dial tcp 192.168.105.5:22: connect: host is down
	W0725 10:50:31.882986    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0725 10:50:31.883005    3134 retry.go:31] will retry after 672.629398ms: dial tcp 192.168.105.5:22: connect: host is down
	W0725 10:50:58.529467    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0725 10:50:58.529519    3134 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0725 10:50:58.529527    3134 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0725 10:50:58.529542    3134 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:50:58.529552    3134 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0725 10:50:58.529556    3134 status.go:255] checking status of ha-603000-m02 ...
	I0725 10:50:58.529762    3134 status.go:330] ha-603000-m02 host status = "Stopped" (err=<nil>)
	I0725 10:50:58.529768    3134 status.go:343] host is not running, skipping remaining checks
	I0725 10:50:58.529770    3134 status.go:257] ha-603000-m02 status: &{Name:ha-603000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:50:58.529774    3134 status.go:255] checking status of ha-603000-m03 ...
	I0725 10:50:58.530420    3134 status.go:330] ha-603000-m03 host status = "Running" (err=<nil>)
	I0725 10:50:58.530427    3134 host.go:66] Checking if "ha-603000-m03" exists ...
	I0725 10:50:58.530544    3134 host.go:66] Checking if "ha-603000-m03" exists ...
	I0725 10:50:58.530677    3134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:50:58.530683    3134 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m03/id_rsa Username:docker}
	W0725 10:52:13.530840    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0725 10:52:13.530919    3134 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0725 10:52:13.530929    3134 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0725 10:52:13.530934    3134 status.go:257] ha-603000-m03 status: &{Name:ha-603000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:52:13.530943    3134 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0725 10:52:13.530947    3134 status.go:255] checking status of ha-603000-m04 ...
	I0725 10:52:13.531670    3134 status.go:330] ha-603000-m04 host status = "Running" (err=<nil>)
	I0725 10:52:13.531677    3134 host.go:66] Checking if "ha-603000-m04" exists ...
	I0725 10:52:13.531778    3134 host.go:66] Checking if "ha-603000-m04" exists ...
	I0725 10:52:13.531902    3134 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 10:52:13.531916    3134 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000-m04/id_rsa Username:docker}
	W0725 10:53:28.532226    3134 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0725 10:53:28.532301    3134 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0725 10:53:28.532310    3134 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0725 10:53:28.532315    3134 status.go:257] ha-603000-m04 status: &{Name:ha-603000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0725 10:53:28.532325    3134 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
E0725 10:53:33.325796    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 3 (25.955884s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0725 10:53:54.488091    3184 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0725 10:53:54.488104    3184 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.06s)
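Note: every failed start in this test dies at the same point: libmachine invokes /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket, so qemu never gets its network file descriptor. A minimal, hypothetical Go probe (not part of minikube or this test suite; the socket path is taken from the logs above) reproduces the same "Connection refused" when the daemon is down:

	// probe_socket_vmnet.go: hypothetical diagnostic helper, not minikube code.
	// Dials the unix socket that socket_vmnet_client connects to; when no
	// daemon is listening, DialTimeout fails with "connection refused",
	// matching the libmachine STDERR captured above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}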

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-603000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-603000 -v=7 --alsologtostderr
E0725 10:57:10.251071    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-603000 -v=7 --alsologtostderr: (3m49.012281167s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.242103083s)

-- stdout --
	* [ha-603000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 10:59:02.331568    3326 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:59:02.331751    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:02.331756    3326 out.go:304] Setting ErrFile to fd 2...
	I0725 10:59:02.331760    3326 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:02.331916    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:59:02.333281    3326 out.go:298] Setting JSON to false
	I0725 10:59:02.354004    3326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3506,"bootTime":1721926836,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:59:02.354078    3326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:59:02.358378    3326 out.go:177] * [ha-603000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:59:02.366216    3326 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:59:02.366249    3326 notify.go:220] Checking for updates...
	I0725 10:59:02.373214    3326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:59:02.380130    3326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:59:02.388130    3326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:59:02.392157    3326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 10:59:02.400208    3326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:59:02.404568    3326 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:59:02.404621    3326 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:59:02.408977    3326 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 10:59:02.417209    3326 start.go:297] selected driver: qemu2
	I0725 10:59:02.417216    3326 start.go:901] validating driver "qemu2" against &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-603000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:59:02.417296    3326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:59:02.420391    3326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 10:59:02.420443    3326 cni.go:84] Creating CNI manager for ""
	I0725 10:59:02.420452    3326 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 10:59:02.420512    3326 start.go:340] cluster config:
	{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-603000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:59:02.425158    3326 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:59:02.434170    3326 out.go:177] * Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	I0725 10:59:02.438225    3326 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:59:02.438254    3326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 10:59:02.438264    3326 cache.go:56] Caching tarball of preloaded images
	I0725 10:59:02.438345    3326 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 10:59:02.438352    3326 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:59:02.438436    3326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/ha-603000/config.json ...
	I0725 10:59:02.438971    3326 start.go:360] acquireMachinesLock for ha-603000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:59:02.439017    3326 start.go:364] duration metric: took 38.375µs to acquireMachinesLock for "ha-603000"
	I0725 10:59:02.439029    3326 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:59:02.439035    3326 fix.go:54] fixHost starting: 
	I0725 10:59:02.439180    3326 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0725 10:59:02.439190    3326 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:59:02.444188    3326 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0725 10:59:02.452231    3326 qemu.go:418] Using hvf for hardware acceleration
	I0725 10:59:02.452277    3326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ac:d5:d3:0f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/disk.qcow2
	I0725 10:59:02.454674    3326 main.go:141] libmachine: STDOUT: 
	I0725 10:59:02.454698    3326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 10:59:02.454740    3326 fix.go:56] duration metric: took 15.70475ms for fixHost
	I0725 10:59:02.454745    3326 start.go:83] releasing machines lock for "ha-603000", held for 15.723667ms
	W0725 10:59:02.454754    3326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 10:59:02.454799    3326 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 10:59:02.454805    3326 start.go:729] Will try again in 5 seconds ...
	I0725 10:59:07.456909    3326 start.go:360] acquireMachinesLock for ha-603000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 10:59:07.457273    3326 start.go:364] duration metric: took 270.708µs to acquireMachinesLock for "ha-603000"
	I0725 10:59:07.457389    3326 start.go:96] Skipping create...Using existing machine configuration
	I0725 10:59:07.457411    3326 fix.go:54] fixHost starting: 
	I0725 10:59:07.458078    3326 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0725 10:59:07.458105    3326 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 10:59:07.466384    3326 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0725 10:59:07.469507    3326 qemu.go:418] Using hvf for hardware acceleration
	I0725 10:59:07.469759    3326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ac:d5:d3:0f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/disk.qcow2
	I0725 10:59:07.478616    3326 main.go:141] libmachine: STDOUT: 
	I0725 10:59:07.478684    3326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 10:59:07.478768    3326 fix.go:56] duration metric: took 21.35725ms for fixHost
	I0725 10:59:07.478788    3326 start.go:83] releasing machines lock for "ha-603000", held for 21.49025ms
	W0725 10:59:07.478970    3326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 10:59:07.487346    3326 out.go:177] 
	W0725 10:59:07.491450    3326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 10:59:07.491507    3326 out.go:239] * 
	* 
	W0725 10:59:07.494137    3326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:59:07.504363    3326 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-603000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-603000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (32.77225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.42s)
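Note: the trace above shows the start path's retry policy: one attempt at fixHost, a fixed 5-second wait ("Will try again in 5 seconds ..."), one more attempt, then exit status 80 (GUEST_PROVISION). The sketch below models that control flow only; it is illustrative and is not the actual minikube start.go implementation:

	// start_retry_sketch.go: illustrative sketch of the try / wait 5s /
	// retry / give-up pattern visible in the timestamps above (10:59:02,
	// then 10:59:07). Not the real minikube code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the qemu2 driver start that fails while
	// connecting to "/var/run/socket_vmnet".
	func startHost() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}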

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.648208ms)

-- stdout --
	* The control-plane node ha-603000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-603000"

-- /stdout --
** stderr ** 
	I0725 10:59:07.643135    3340 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:59:07.643356    3340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:07.643359    3340 out.go:304] Setting ErrFile to fd 2...
	I0725 10:59:07.643361    3340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:07.643479    3340 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:59:07.643702    3340 mustload.go:65] Loading cluster: ha-603000
	I0725 10:59:07.643928    3340 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0725 10:59:07.644240    3340 out.go:239] ! The control-plane node ha-603000 host is not running (will try others): state=Stopped
	! The control-plane node ha-603000 host is not running (will try others): state=Stopped
	W0725 10:59:07.644351    3340 out.go:239] ! The control-plane node ha-603000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-603000-m02 host is not running (will try others): state=Stopped
	I0725 10:59:07.649186    3340 out.go:177] * The control-plane node ha-603000-m03 host is not running: state=Stopped
	I0725 10:59:07.652148    3340 out.go:177]   To start a cluster, run: "minikube start -p ha-603000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-603000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (29.314625ms)

-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0725 10:59:07.683307    3342 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:59:07.683497    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:07.683500    3342 out.go:304] Setting ErrFile to fd 2...
	I0725 10:59:07.683502    3342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:59:07.683611    3342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:59:07.683737    3342 out.go:298] Setting JSON to false
	I0725 10:59:07.683747    3342 mustload.go:65] Loading cluster: ha-603000
	I0725 10:59:07.683815    3342 notify.go:220] Checking for updates...
	I0725 10:59:07.683957    3342 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:59:07.683964    3342 status.go:255] checking status of ha-603000 ...
	I0725 10:59:07.684166    3342 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0725 10:59:07.684170    3342 status.go:343] host is not running, skipping remaining checks
	I0725 10:59:07.684172    3342 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:59:07.684181    3342 status.go:255] checking status of ha-603000-m02 ...
	I0725 10:59:07.684269    3342 status.go:330] ha-603000-m02 host status = "Stopped" (err=<nil>)
	I0725 10:59:07.684272    3342 status.go:343] host is not running, skipping remaining checks
	I0725 10:59:07.684273    3342 status.go:257] ha-603000-m02 status: &{Name:ha-603000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:59:07.684277    3342 status.go:255] checking status of ha-603000-m03 ...
	I0725 10:59:07.684360    3342 status.go:330] ha-603000-m03 host status = "Stopped" (err=<nil>)
	I0725 10:59:07.684363    3342 status.go:343] host is not running, skipping remaining checks
	I0725 10:59:07.684368    3342 status.go:257] ha-603000-m03 status: &{Name:ha-603000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 10:59:07.684371    3342 status.go:255] checking status of ha-603000-m04 ...
	I0725 10:59:07.684464    3342 status.go:330] ha-603000-m04 host status = "Stopped" (err=<nil>)
	I0725 10:59:07.684469    3342 status.go:343] host is not running, skipping remaining checks
	I0725 10:59:07.684470    3342 status.go:257] ha-603000-m04 status: &{Name:ha-603000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (28.8385ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.01s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (47.846666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.01s)
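
The "Degraded" checks above and below reduce to a three-way classification of the profile's control-plane hosts. A minimal sketch of that distinction in Go (a hypothetical, simplified helper, not minikube's actual implementation; only the "Degraded" and "Stopped" labels appear in this log, and the healthy label here is a placeholder):

	package main

	import "fmt"

	// classify mirrors the distinction ha_test.go:413 probes (simplified):
	// with every control-plane host running the profile is healthy, with
	// only some running it is "Degraded", and with none running it is
	// "Stopped" - so a fully stopped cluster can never satisfy the
	// "Degraded" expectation.
	func classify(runningControlPlanes, totalControlPlanes int) string {
		switch {
		case runningControlPlanes == totalControlPlanes:
			return "OK" // placeholder label for the healthy case
		case runningControlPlanes > 0:
			return "Degraded"
		default:
			return "Stopped"
		}
	}

	func main() {
		fmt.Println(classify(2, 3)) // Degraded - what the test expects
		fmt.Println(classify(0, 3)) // Stopped - what the report shows
	}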

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 stop -v=7 --alsologtostderr
E0725 10:59:15.270099    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 11:00:38.317744    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 11:02:10.226006    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-603000 stop -v=7 --alsologtostderr: (3m21.984425959s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr: exit status 7 (68.337292ms)

-- stdout --
	ha-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-603000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:02:30.756357    3820 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:02:30.756582    3820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:30.756587    3820 out.go:304] Setting ErrFile to fd 2...
	I0725 11:02:30.756597    3820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:30.756772    3820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:02:30.756942    3820 out.go:298] Setting JSON to false
	I0725 11:02:30.756955    3820 mustload.go:65] Loading cluster: ha-603000
	I0725 11:02:30.756993    3820 notify.go:220] Checking for updates...
	I0725 11:02:30.757268    3820 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:02:30.757276    3820 status.go:255] checking status of ha-603000 ...
	I0725 11:02:30.757540    3820 status.go:330] ha-603000 host status = "Stopped" (err=<nil>)
	I0725 11:02:30.757545    3820 status.go:343] host is not running, skipping remaining checks
	I0725 11:02:30.757547    3820 status.go:257] ha-603000 status: &{Name:ha-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:02:30.757560    3820 status.go:255] checking status of ha-603000-m02 ...
	I0725 11:02:30.757691    3820 status.go:330] ha-603000-m02 host status = "Stopped" (err=<nil>)
	I0725 11:02:30.757695    3820 status.go:343] host is not running, skipping remaining checks
	I0725 11:02:30.757698    3820 status.go:257] ha-603000-m02 status: &{Name:ha-603000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:02:30.757703    3820 status.go:255] checking status of ha-603000-m03 ...
	I0725 11:02:30.757842    3820 status.go:330] ha-603000-m03 host status = "Stopped" (err=<nil>)
	I0725 11:02:30.757847    3820 status.go:343] host is not running, skipping remaining checks
	I0725 11:02:30.757849    3820 status.go:257] ha-603000-m03 status: &{Name:ha-603000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 11:02:30.757854    3820 status.go:255] checking status of ha-603000-m04 ...
	I0725 11:02:30.757977    3820 status.go:330] ha-603000-m04 host status = "Stopped" (err=<nil>)
	I0725 11:02:30.757981    3820 status.go:343] host is not running, skipping remaining checks
	I0725 11:02:30.757984    3820 status.go:257] ha-603000-m04 status: &{Name:ha-603000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr": ha-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-603000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (32.217917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180697417s)

-- stdout --
	* [ha-603000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:02:30.820273    3824 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:02:30.820417    3824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:30.820420    3824 out.go:304] Setting ErrFile to fd 2...
	I0725 11:02:30.820422    3824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:30.820555    3824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:02:30.821799    3824 out.go:298] Setting JSON to false
	I0725 11:02:30.839288    3824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3714,"bootTime":1721926836,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:02:30.839365    3824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:02:30.843586    3824 out.go:177] * [ha-603000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:02:30.850581    3824 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:02:30.850617    3824 notify.go:220] Checking for updates...
	I0725 11:02:30.856563    3824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:02:30.859587    3824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:02:30.862569    3824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:02:30.865546    3824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:02:30.868571    3824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:02:30.870194    3824 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:02:30.870430    3824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:02:30.874517    3824 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:02:30.881430    3824 start.go:297] selected driver: qemu2
	I0725 11:02:30.881445    3824 start.go:901] validating driver "qemu2" against &{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-603000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:02:30.881532    3824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:02:30.883973    3824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:02:30.884014    3824 cni.go:84] Creating CNI manager for ""
	I0725 11:02:30.884019    3824 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0725 11:02:30.884069    3824 start.go:340] cluster config:
	{Name:ha-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-603000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:02:30.887490    3824 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:02:30.895542    3824 out.go:177] * Starting "ha-603000" primary control-plane node in "ha-603000" cluster
	I0725 11:02:30.899458    3824 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:02:30.899476    3824 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:02:30.899483    3824 cache.go:56] Caching tarball of preloaded images
	I0725 11:02:30.899546    3824 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:02:30.899551    3824 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:02:30.899609    3824 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/ha-603000/config.json ...
	I0725 11:02:30.899929    3824 start.go:360] acquireMachinesLock for ha-603000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:02:30.899963    3824 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "ha-603000"
	I0725 11:02:30.899973    3824 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:02:30.899978    3824 fix.go:54] fixHost starting: 
	I0725 11:02:30.900081    3824 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0725 11:02:30.900091    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:02:30.904537    3824 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0725 11:02:30.912612    3824 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:02:30.912651    3824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ac:d5:d3:0f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/disk.qcow2
	I0725 11:02:30.914746    3824 main.go:141] libmachine: STDOUT: 
	I0725 11:02:30.914765    3824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:02:30.914794    3824 fix.go:56] duration metric: took 14.815667ms for fixHost
	I0725 11:02:30.914798    3824 start.go:83] releasing machines lock for "ha-603000", held for 14.831042ms
	W0725 11:02:30.914805    3824 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:02:30.914841    3824 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:02:30.914845    3824 start.go:729] Will try again in 5 seconds ...
	I0725 11:02:35.915904    3824 start.go:360] acquireMachinesLock for ha-603000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:02:35.916430    3824 start.go:364] duration metric: took 371.75µs to acquireMachinesLock for "ha-603000"
	I0725 11:02:35.916594    3824 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:02:35.916617    3824 fix.go:54] fixHost starting: 
	I0725 11:02:35.917373    3824 fix.go:112] recreateIfNeeded on ha-603000: state=Stopped err=<nil>
	W0725 11:02:35.917403    3824 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:02:35.925746    3824 out.go:177] * Restarting existing qemu2 VM for "ha-603000" ...
	I0725 11:02:35.929826    3824 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:02:35.930028    3824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:ac:d5:d3:0f:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/ha-603000/disk.qcow2
	I0725 11:02:35.938938    3824 main.go:141] libmachine: STDOUT: 
	I0725 11:02:35.939012    3824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:02:35.939096    3824 fix.go:56] duration metric: took 22.480792ms for fixHost
	I0725 11:02:35.939120    3824 start.go:83] releasing machines lock for "ha-603000", held for 22.642541ms
	W0725 11:02:35.939328    3824 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:02:35.946842    3824 out.go:177] 
	W0725 11:02:35.950641    3824 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:02:35.950664    3824 out.go:239] * 
	* 
	W0725 11:02:35.953390    3824 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:02:35.963719    3824 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-603000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (67.068708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
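
Every start and restart in this report fails at the same precondition: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before qemu-system-aarch64 is handed a network file descriptor. A minimal probe of that socket in Go (a diagnostic sketch, not part of the test suite) reproduces the "Connection refused" seen throughout when the daemon is not listening:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSocketVMnet dials the unix socket the same way socket_vmnet_client
	// must before it can pass qemu a networking file descriptor. A
	// "connection refused" here matches the driver failures in this report.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // e.g. "... connect: connection refused"
			return
		}
		fmt.Println("socket_vmnet is up")
	}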

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-603000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-603000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (29.785542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr: exit status 83 (40.735541ms)

-- stdout --
	* The control-plane node ha-603000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-603000"

-- /stdout --
** stderr ** 
	I0725 11:02:36.148404    3839 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:02:36.148766    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:36.148769    3839 out.go:304] Setting ErrFile to fd 2...
	I0725 11:02:36.148772    3839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:02:36.148952    3839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:02:36.149183    3839 mustload.go:65] Loading cluster: ha-603000
	I0725 11:02:36.149408    3839 config.go:182] Loaded profile config "ha-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0725 11:02:36.149720    3839 out.go:239] ! The control-plane node ha-603000 host is not running (will try others): state=Stopped
	! The control-plane node ha-603000 host is not running (will try others): state=Stopped
	W0725 11:02:36.149822    3839 out.go:239] ! The control-plane node ha-603000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-603000-m02 host is not running (will try others): state=Stopped
	I0725 11:02:36.152782    3839 out.go:177] * The control-plane node ha-603000-m03 host is not running: state=Stopped
	I0725 11:02:36.156779    3839 out.go:177]   To start a cluster, run: "minikube start -p ha-603000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-603000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-603000 -n ha-603000: exit status 7 (29.588625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-927000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-927000 --driver=qemu2 : exit status 80 (9.838565s)

-- stdout --
	* [image-927000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-927000" primary control-plane node in "image-927000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-927000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-927000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-927000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-927000 -n image-927000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-927000 -n image-927000: exit status 7 (68.473083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-927000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)

TestJSONOutput/start/Command (10.04s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-180000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-180000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (10.042504917s)

-- stdout --
	{"specversion":"1.0","id":"2714c923-66be-4f1c-b60b-06a0b608cc88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-180000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"50d56cbb-5c3f-4ea1-a504-dbe87136a3ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"ed953213-037d-457f-acb4-396f8e726cb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig"}}
	{"specversion":"1.0","id":"92f0aa46-67ef-4d34-88a9-c6b113fa7d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2d7de4f3-9158-4fcf-98c0-c813400174dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7bc6dcd-7e0b-4de9-a35f-6c99adba0917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube"}}
	{"specversion":"1.0","id":"097a092f-804c-4a56-9e02-51e1af3f33f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bbcfd2f7-5fbe-4a89-981a-c8c481bd59c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9af23f3-42f6-4892-afb6-9d91f4f6b08e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a3ac0dad-b286-4e56-9f6d-5add277637e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-180000\" primary control-plane node in \"json-output-180000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0028e17-574d-4b12-87f2-04f40f9c6fa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b3267bf0-2c34-4003-8b55-31643a6703a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-180000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"32226930-495d-4f9e-9f20-1ef0fd28cf81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ea80a7d4-0f66-4245-bfbc-77b68e6314fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"65f37e11-bbc0-4f7c-af4a-2073d9550dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-180000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"c71fb9ef-6b7e-412a-b4ac-77af387f50de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"4bf373b9-75aa-40b0-8ee7-7712f2581427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-180000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (10.04s)
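
The "converting to cloud events" failure is mechanical: the harness decodes stdout as a stream of JSON CloudEvents, and the bare "OUTPUT: " / "ERROR: ..." lines that socket_vmnet_client interleaves with the events are not JSON. A minimal reproduction in Go (assuming line-by-line decoding, which the quoted parse error suggests):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// One well-formed CloudEvent line and one stray driver line,
		// as in the stdout captured above.
		lines := []string{
			`{"specversion":"1.0","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=19326"}}`,
			`OUTPUT: `,
		}
		for _, l := range lines {
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(l), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println(err)
				continue
			}
			fmt.Println("ok:", ev["type"])
		}
	}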

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-180000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-180000 --output=json --user=testUser: exit status 83 (74.921833ms)

-- stdout --
	{"specversion":"1.0","id":"4000d243-4009-413b-8722-9b17d5aa8ec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-180000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"609e24de-76de-4654-80e9-c31ce6daeda4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-180000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-180000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-180000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-180000 --output=json --user=testUser: exit status 83 (44.348166ms)

-- stdout --
	* The control-plane node json-output-180000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-180000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-180000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-180000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-988000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-988000 --driver=qemu2 : exit status 80 (9.808352s)

-- stdout --
	* [first-988000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-988000" primary control-plane node in "first-988000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-988000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-988000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-988000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-25 11:03:08.859691 -0700 PDT m=+2114.783524084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-989000 -n second-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-989000 -n second-989000: exit status 85 (77.197375ms)

-- stdout --
	* Profile "second-989000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-989000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-989000" host is not running, skipping log retrieval (state="* Profile \"second-989000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-989000\"")
helpers_test.go:175: Cleaning up "second-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-989000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-25 11:03:09.043018 -0700 PDT m=+2114.966855876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-988000 -n first-988000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-988000 -n first-988000: exit status 7 (30.87875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-988000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-988000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-988000
--- FAIL: TestMinikubeProfile (10.10s)
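
Analysis: every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the socket_vmnet daemon is either not running or listening elsewhere on this agent. A pre-flight probe along these lines would surface that before each test spends its timeout (an illustrative sketch, not part of the suite; the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same socket socket_vmnet_client uses; "connection refused"
		// here reproduces the ERROR seen in every qemu2 VM creation above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}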

TestMountStart/serial/StartWithMountFirst (10.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-082000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-082000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.163038708s)

-- stdout --
	* [mount-start-1-082000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-082000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-082000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-082000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-082000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-082000 -n mount-start-1-082000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-082000 -n mount-start-1-082000: exit status 7 (67.023666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-082000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.23s)

TestMultiNode/serial/FreshStart2Nodes (10.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-638000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-638000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.999678708s)

-- stdout --
	* [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-638000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:03:19.590266    3987 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:03:19.590378    3987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:03:19.590382    3987 out.go:304] Setting ErrFile to fd 2...
	I0725 11:03:19.590385    3987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:03:19.590539    3987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:03:19.591622    3987 out.go:298] Setting JSON to false
	I0725 11:03:19.607428    3987 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3763,"bootTime":1721926836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:03:19.607504    3987 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:03:19.613744    3987 out.go:177] * [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:03:19.620639    3987 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:03:19.620702    3987 notify.go:220] Checking for updates...
	I0725 11:03:19.626014    3987 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:03:19.628681    3987 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:03:19.631662    3987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:03:19.634714    3987 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:03:19.637746    3987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:03:19.640926    3987 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:03:19.645639    3987 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:03:19.652674    3987 start.go:297] selected driver: qemu2
	I0725 11:03:19.652682    3987 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:03:19.652690    3987 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:03:19.654790    3987 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:03:19.657667    3987 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:03:19.660740    3987 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:03:19.660782    3987 cni.go:84] Creating CNI manager for ""
	I0725 11:03:19.660788    3987 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0725 11:03:19.660798    3987 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 11:03:19.660835    3987 start.go:340] cluster config:
	{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:03:19.664489    3987 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:03:19.671535    3987 out.go:177] * Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	I0725 11:03:19.675663    3987 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:03:19.675679    3987 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:03:19.675688    3987 cache.go:56] Caching tarball of preloaded images
	I0725 11:03:19.675753    3987 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:03:19.675761    3987 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:03:19.675979    3987 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/multinode-638000/config.json ...
	I0725 11:03:19.675990    3987 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/multinode-638000/config.json: {Name:mk5ab0c5044610fd1b9bb04a0e65056df4c9c763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:03:19.676374    3987 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:03:19.676412    3987 start.go:364] duration metric: took 31.084µs to acquireMachinesLock for "multinode-638000"
	I0725 11:03:19.676425    3987 start.go:93] Provisioning new machine with config: &{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:03:19.676456    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:03:19.684668    3987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:03:19.702478    3987 start.go:159] libmachine.API.Create for "multinode-638000" (driver="qemu2")
	I0725 11:03:19.702514    3987 client.go:168] LocalClient.Create starting
	I0725 11:03:19.702578    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:03:19.702608    3987 main.go:141] libmachine: Decoding PEM data...
	I0725 11:03:19.702618    3987 main.go:141] libmachine: Parsing certificate...
	I0725 11:03:19.702656    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:03:19.702680    3987 main.go:141] libmachine: Decoding PEM data...
	I0725 11:03:19.702690    3987 main.go:141] libmachine: Parsing certificate...
	I0725 11:03:19.703051    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:03:19.852258    3987 main.go:141] libmachine: Creating SSH key...
	I0725 11:03:20.164172    3987 main.go:141] libmachine: Creating Disk image...
	I0725 11:03:20.164183    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:03:20.164401    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:20.173889    3987 main.go:141] libmachine: STDOUT: 
	I0725 11:03:20.173905    3987 main.go:141] libmachine: STDERR: 
	I0725 11:03:20.173953    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2 +20000M
	I0725 11:03:20.181856    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:03:20.181868    3987 main.go:141] libmachine: STDERR: 
	I0725 11:03:20.181885    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:20.181890    3987 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:03:20.181901    3987 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:03:20.181929    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:40:f9:39:6b:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:20.183541    3987 main.go:141] libmachine: STDOUT: 
	I0725 11:03:20.183554    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:03:20.183573    3987 client.go:171] duration metric: took 481.068541ms to LocalClient.Create
	I0725 11:03:22.185718    3987 start.go:128] duration metric: took 2.509313s to createHost
	I0725 11:03:22.185962    3987 start.go:83] releasing machines lock for "multinode-638000", held for 2.509453541s
	W0725 11:03:22.186024    3987 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:03:22.193021    3987 out.go:177] * Deleting "multinode-638000" in qemu2 ...
	W0725 11:03:22.220072    3987 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:03:22.220100    3987 start.go:729] Will try again in 5 seconds ...
	I0725 11:03:27.222180    3987 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:03:27.222702    3987 start.go:364] duration metric: took 428.084µs to acquireMachinesLock for "multinode-638000"
	I0725 11:03:27.222848    3987 start.go:93] Provisioning new machine with config: &{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:03:27.223108    3987 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:03:27.228869    3987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:03:27.280949    3987 start.go:159] libmachine.API.Create for "multinode-638000" (driver="qemu2")
	I0725 11:03:27.280997    3987 client.go:168] LocalClient.Create starting
	I0725 11:03:27.281118    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:03:27.281178    3987 main.go:141] libmachine: Decoding PEM data...
	I0725 11:03:27.281195    3987 main.go:141] libmachine: Parsing certificate...
	I0725 11:03:27.281271    3987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:03:27.281315    3987 main.go:141] libmachine: Decoding PEM data...
	I0725 11:03:27.281328    3987 main.go:141] libmachine: Parsing certificate...
	I0725 11:03:27.281838    3987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:03:27.448570    3987 main.go:141] libmachine: Creating SSH key...
	I0725 11:03:27.494697    3987 main.go:141] libmachine: Creating Disk image...
	I0725 11:03:27.494702    3987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:03:27.494860    3987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:27.503990    3987 main.go:141] libmachine: STDOUT: 
	I0725 11:03:27.504080    3987 main.go:141] libmachine: STDERR: 
	I0725 11:03:27.504131    3987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2 +20000M
	I0725 11:03:27.511859    3987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:03:27.511917    3987 main.go:141] libmachine: STDERR: 
	I0725 11:03:27.511927    3987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:27.511937    3987 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:03:27.511944    3987 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:03:27.511975    3987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:eb:85:2b:3c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:03:27.513605    3987 main.go:141] libmachine: STDOUT: 
	I0725 11:03:27.513683    3987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:03:27.513699    3987 client.go:171] duration metric: took 232.702417ms to LocalClient.Create
	I0725 11:03:29.515863    3987 start.go:128] duration metric: took 2.292780917s to createHost
	I0725 11:03:29.515952    3987 start.go:83] releasing machines lock for "multinode-638000", held for 2.293281042s
	W0725 11:03:29.516484    3987 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:03:29.531014    3987 out.go:177] 
	W0725 11:03:29.534287    3987 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:03:29.534311    3987 out.go:239] * 
	* 
	W0725 11:03:29.536876    3987 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:03:29.548158    3987 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-638000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (65.201875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.07s)
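
Analysis: the stderr above shows provisioning getting cleanly through disk-image creation (qemu-img convert, then resize, both with empty STDERR) and failing only at the socket_vmnet_client launch, which narrows the breakage to the vmnet socket rather than QEMU or the image cache. A sketch replaying just the two disk steps libmachine logs (example paths, not the CI agent's layout; assumes qemu-img on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mirroring the
	// STDOUT/STDERR pairs libmachine logs for each qemu-img invocation.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%serr=%v\n", name, args, out, err)
	}

	func main() {
		base := "/tmp/minikube-demo" // hypothetical machine directory
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", base+"/disk.qcow2.raw", base+"/disk.qcow2")
		run("qemu-img", "resize", base+"/disk.qcow2", "+20000M")
	}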

TestMultiNode/serial/DeployApp2Nodes (115.52s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.592041ms)

** stderr ** 
	error: cluster "multinode-638000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- rollout status deployment/busybox: exit status 1 (57.822875ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.557209ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.705167ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.601792ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.083292ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.283416ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.291875ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.146042ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.33375ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.320875ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0725 11:04:15.244155    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.590291ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.614042ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.76825ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.3635ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.354542ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.209834ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (29.750583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (115.52s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-638000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.473125ms)

** stderr ** 
	error: no server found for cluster "multinode-638000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (29.83575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-638000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-638000 -v 3 --alsologtostderr: exit status 83 (41.742333ms)

-- stdout --
	* The control-plane node multinode-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-638000"

-- /stdout --
** stderr ** 
	I0725 11:05:25.257895    4096 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:25.258056    4096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.258059    4096 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:25.258061    4096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.258180    4096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:25.258398    4096 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:25.258583    4096 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:25.263125    4096 out.go:177] * The control-plane node multinode-638000 host is not running: state=Stopped
	I0725 11:05:25.267084    4096 out.go:177]   To start a cluster, run: "minikube start -p multinode-638000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-638000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (28.651459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-638000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-638000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.289375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-638000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-638000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-638000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (30.113041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.07s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-638000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-638000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-638000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-638000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (28.973417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
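
The assertion above decodes the `profile list --output json` payload and counts Config.Nodes; with the worker nodes never created, the saved profile carries only a single control-plane entry. A sketch of that check, assuming only the field subset visible in the JSON captured in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the fields the node-count check needs; the
// field names follow the JSON shown in the log above.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
				Worker       bool
			}
		}
	}
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test expects 3 entries here; the stopped cluster reports
		// only its primary control-plane node in the saved config.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}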

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status --output json --alsologtostderr: exit status 7 (29.1595ms)

-- stdout --
	{"Name":"multinode-638000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0725 11:05:25.461809    4108 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:25.461963    4108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.461966    4108 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:25.461969    4108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.462105    4108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:25.462230    4108 out.go:298] Setting JSON to true
	I0725 11:05:25.462243    4108 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:25.462314    4108 notify.go:220] Checking for updates...
	I0725 11:05:25.462440    4108 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:25.462446    4108 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:25.462668    4108 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:25.462672    4108 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:25.462674    4108 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-638000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (28.745666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
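
The unmarshal error above is a shape mismatch: with a single node, `minikube status --output json` emits one JSON object, while the test decodes into a slice ([]cmd.Status). A sketch of a tolerant decoder that accepts either shape (Status here is a stand-in struct built from the fields visible in the log, not minikube's actual cmd.Status type):

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a hypothetical stand-in for the fields shown in the log above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either a single JSON object (one node) or an
// array of objects (multi-node) and always returns a slice.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, fmt.Errorf("neither array nor object: %w", err)
	}
	return []Status{one}, nil
}

func main() {
	single := []byte(`{"Name":"multinode-638000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	fmt.Println(decodeStatuses(single))
}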

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 node stop m03: exit status 85 (44.952ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-638000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status: exit status 7 (29.381958ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr: exit status 7 (29.7825ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:25.595633    4116 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:25.595781    4116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.595784    4116 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:25.595786    4116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.595906    4116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:25.596023    4116 out.go:298] Setting JSON to false
	I0725 11:05:25.596033    4116 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:25.596097    4116 notify.go:220] Checking for updates...
	I0725 11:05:25.596236    4116 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:25.596243    4116 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:25.596447    4116 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:25.596451    4116 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:25.596453    4116 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr": multinode-638000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (29.55925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
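
`node stop m03` exits 85 (GUEST_NODE_RETRIEVE) because the m03 worker was never created: the earlier FreshStart2Nodes failure left the profile with only its primary node. A hedged pre-flight sketch that consults `node list` before stopping (hasNode is hypothetical; the name-matching convention is an assumption based on the commands visible in this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasNode reports whether `minikube node list` shows the given node.
// Assumption: the list prints fully qualified names (e.g.
// multinode-638000-m03), while `node stop` takes the short suffix m03.
func hasNode(profile, node string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", profile).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if fields := strings.Fields(line); len(fields) > 0 && fields[0] == node {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	fmt.Println(hasNode("multinode-638000", "multinode-638000-m03"))
}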

TestMultiNode/serial/StartAfterStop (49.84s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 node start m03 -v=7 --alsologtostderr: exit status 85 (42.78075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0725 11:05:25.654458    4120 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:25.654684    4120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.654688    4120 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:25.654690    4120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.654810    4120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:25.655045    4120 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:25.655231    4120 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:25.660103    4120 out.go:177] 
	W0725 11:05:25.661190    4120 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0725 11:05:25.661195    4120 out.go:239] * 
	* 
	W0725 11:05:25.662849    4120 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:05:25.666057    4120 out.go:177] 

** /stderr **
multinode_test.go:284: I0725 11:05:25.654458    4120 out.go:291] Setting OutFile to fd 1 ...
I0725 11:05:25.654684    4120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 11:05:25.654688    4120 out.go:304] Setting ErrFile to fd 2...
I0725 11:05:25.654690    4120 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 11:05:25.654810    4120 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 11:05:25.655045    4120 mustload.go:65] Loading cluster: multinode-638000
I0725 11:05:25.655231    4120 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 11:05:25.660103    4120 out.go:177] 
W0725 11:05:25.661190    4120 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0725 11:05:25.661195    4120 out.go:239] * 
* 
W0725 11:05:25.662849    4120 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0725 11:05:25.666057    4120 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-638000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (29.147542ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:25.697602    4122 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:25.697748    4122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.697751    4122 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:25.697753    4122 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:25.697891    4122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:25.697999    4122 out.go:298] Setting JSON to false
	I0725 11:05:25.698009    4122 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:25.698066    4122 notify.go:220] Checking for updates...
	I0725 11:05:25.698228    4122 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:25.698235    4122 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:25.698433    4122 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:25.698437    4122 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:25.698438    4122 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (71.489958ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:26.491993    4124 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:26.492207    4124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:26.492211    4124 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:26.492214    4124 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:26.492395    4124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:26.492553    4124 out.go:298] Setting JSON to false
	I0725 11:05:26.492566    4124 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:26.492605    4124 notify.go:220] Checking for updates...
	I0725 11:05:26.492825    4124 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:26.492833    4124 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:26.493101    4124 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:26.493106    4124 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:26.493109    4124 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (75.378292ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:28.475490    4126 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:28.475741    4126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:28.475746    4126 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:28.475750    4126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:28.475951    4126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:28.476121    4126 out.go:298] Setting JSON to false
	I0725 11:05:28.476135    4126 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:28.476186    4126 notify.go:220] Checking for updates...
	I0725 11:05:28.476414    4126 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:28.476423    4126 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:28.476723    4126 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:28.476728    4126 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:28.476731    4126 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (71.820708ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:31.563643    4128 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:31.563845    4128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:31.563850    4128 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:31.563853    4128 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:31.564046    4128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:31.564224    4128 out.go:298] Setting JSON to false
	I0725 11:05:31.564238    4128 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:31.564273    4128 notify.go:220] Checking for updates...
	I0725 11:05:31.564566    4128 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:31.564579    4128 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:31.564903    4128 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:31.564909    4128 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:31.564913    4128 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (72.527417ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:34.285544    4130 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:34.285748    4130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:34.285753    4130 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:34.285756    4130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:34.285968    4130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:34.286137    4130 out.go:298] Setting JSON to false
	I0725 11:05:34.286153    4130 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:34.286192    4130 notify.go:220] Checking for updates...
	I0725 11:05:34.286465    4130 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:34.286474    4130 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:34.286780    4130 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:34.286785    4130 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:34.286789    4130 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (70.820083ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:37.995993    4132 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:37.996218    4132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:37.996222    4132 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:37.996225    4132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:37.996386    4132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:37.996564    4132 out.go:298] Setting JSON to false
	I0725 11:05:37.996575    4132 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:37.996613    4132 notify.go:220] Checking for updates...
	I0725 11:05:37.996824    4132 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:37.996833    4132 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:37.997094    4132 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:37.997099    4132 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:37.997102    4132 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (72.840291ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:43.465376    4134 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:43.465599    4134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:43.465604    4134 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:43.465607    4134 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:43.465802    4134 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:43.466003    4134 out.go:298] Setting JSON to false
	I0725 11:05:43.466022    4134 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:43.466063    4134 notify.go:220] Checking for updates...
	I0725 11:05:43.466299    4134 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:43.466308    4134 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:43.466607    4134 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:43.466612    4134 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:43.466615    4134 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (71.239084ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:05:52.470188    4141 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:05:52.470492    4141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:52.470498    4141 out.go:304] Setting ErrFile to fd 2...
	I0725 11:05:52.470502    4141 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:05:52.470696    4141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:05:52.470869    4141 out.go:298] Setting JSON to false
	I0725 11:05:52.470891    4141 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:05:52.470941    4141 notify.go:220] Checking for updates...
	I0725 11:05:52.471199    4141 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:05:52.471208    4141 status.go:255] checking status of multinode-638000 ...
	I0725 11:05:52.471501    4141 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:05:52.471507    4141 status.go:343] host is not running, skipping remaining checks
	I0725 11:05:52.471510    4141 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr: exit status 7 (71.976208ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:06:15.427965    4149 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:15.428171    4149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:15.428175    4149 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:15.428178    4149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:15.428344    4149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:15.428504    4149 out.go:298] Setting JSON to false
	I0725 11:06:15.428517    4149 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:06:15.428553    4149 notify.go:220] Checking for updates...
	I0725 11:06:15.428754    4149 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:15.428762    4149 status.go:255] checking status of multinode-638000 ...
	I0725 11:06:15.429038    4149 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:06:15.429042    4149 status.go:343] host is not running, skipping remaining checks
	I0725 11:06:15.429045    4149 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-638000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (33.127ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (49.84s)
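
The eight status invocations above, spaced from 11:05:25 to 11:06:15, are the harness polling with growing delays for the restarted node to come up; each attempt returns exit status 7 because the host never leaves Stopped. A sketch of that polling pattern, assuming capped exponential backoff (waitRunning is hypothetical, not the harness's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `minikube status` with capped exponential backoff
// until the host line reports Running or the timeout expires.
func waitRunning(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"-p", profile, "status").Output()
		if strings.Contains(string(out), "host: Running") {
			return nil
		}
		time.Sleep(delay)
		if delay < 30*time.Second {
			delay *= 2 // back off, capped at 30s between attempts
		}
	}
	return fmt.Errorf("%s did not reach Running within %s", profile, timeout)
}

func main() {
	fmt.Println(waitRunning("multinode-638000", time.Minute))
}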

TestMultiNode/serial/RestartKeepsNodes (9.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-638000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-638000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-638000: (3.850261875s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-638000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-638000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.211912208s)

-- stdout --
	* [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	* Restarting existing qemu2 VM for "multinode-638000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-638000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:06:19.402350    4175 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:19.402523    4175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:19.402527    4175 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:19.402531    4175 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:19.402676    4175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:19.403859    4175 out.go:298] Setting JSON to false
	I0725 11:06:19.423102    4175 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3943,"bootTime":1721926836,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:06:19.423186    4175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:06:19.426879    4175 out.go:177] * [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:06:19.433856    4175 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:06:19.433942    4175 notify.go:220] Checking for updates...
	I0725 11:06:19.440788    4175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:06:19.443835    4175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:06:19.446801    4175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:06:19.449814    4175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:06:19.452826    4175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:06:19.454389    4175 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:19.454447    4175 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:06:19.458725    4175 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:06:19.465676    4175 start.go:297] selected driver: qemu2
	I0725 11:06:19.465685    4175 start.go:901] validating driver "qemu2" against &{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:06:19.465760    4175 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:06:19.468072    4175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:06:19.468111    4175 cni.go:84] Creating CNI manager for ""
	I0725 11:06:19.468116    4175 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 11:06:19.468157    4175 start.go:340] cluster config:
	{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:06:19.471755    4175 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:19.478823    4175 out.go:177] * Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	I0725 11:06:19.482766    4175 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:06:19.482785    4175 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:06:19.482793    4175 cache.go:56] Caching tarball of preloaded images
	I0725 11:06:19.482849    4175 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:06:19.482855    4175 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:06:19.482910    4175 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/multinode-638000/config.json ...
	I0725 11:06:19.483329    4175 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:06:19.483368    4175 start.go:364] duration metric: took 32.625µs to acquireMachinesLock for "multinode-638000"
	I0725 11:06:19.483379    4175 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:06:19.483384    4175 fix.go:54] fixHost starting: 
	I0725 11:06:19.483514    4175 fix.go:112] recreateIfNeeded on multinode-638000: state=Stopped err=<nil>
	W0725 11:06:19.483523    4175 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:06:19.491693    4175 out.go:177] * Restarting existing qemu2 VM for "multinode-638000" ...
	I0725 11:06:19.495789    4175 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:06:19.495826    4175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:eb:85:2b:3c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:06:19.498026    4175 main.go:141] libmachine: STDOUT: 
	I0725 11:06:19.498047    4175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:06:19.498082    4175 fix.go:56] duration metric: took 14.69825ms for fixHost
	I0725 11:06:19.498087    4175 start.go:83] releasing machines lock for "multinode-638000", held for 14.713625ms
	W0725 11:06:19.498095    4175 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:06:19.498134    4175 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:19.498139    4175 start.go:729] Will try again in 5 seconds ...
	I0725 11:06:24.499915    4175 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:06:24.500271    4175 start.go:364] duration metric: took 279.917µs to acquireMachinesLock for "multinode-638000"
	I0725 11:06:24.500419    4175 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:06:24.500441    4175 fix.go:54] fixHost starting: 
	I0725 11:06:24.501141    4175 fix.go:112] recreateIfNeeded on multinode-638000: state=Stopped err=<nil>
	W0725 11:06:24.501169    4175 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:06:24.509545    4175 out.go:177] * Restarting existing qemu2 VM for "multinode-638000" ...
	I0725 11:06:24.512497    4175 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:06:24.512741    4175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:eb:85:2b:3c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:06:24.521532    4175 main.go:141] libmachine: STDOUT: 
	I0725 11:06:24.521605    4175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:06:24.521690    4175 fix.go:56] duration metric: took 21.246042ms for fixHost
	I0725 11:06:24.521710    4175 start.go:83] releasing machines lock for "multinode-638000", held for 21.413292ms
	W0725 11:06:24.521950    4175 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:24.528539    4175 out.go:177] 
	W0725 11:06:24.532575    4175 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:06:24.532622    4175 out.go:239] * 
	* 
	W0725 11:06:24.535473    4175 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:06:24.542515    4175 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-638000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-638000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (32.944541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.19s)
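Every start attempt in this run dies at the same point: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket ("Connection refused"), so qemu never receives its network file descriptor and the VM is never launched. As a minimal sketch of a pre-flight check for that daemon (the socket path is taken from the log above; the probe itself is illustrative, not part of minikube):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the control socket the same way
	// socket_vmnet_client does before handing qemu an fd. A
	// "connection refused" here reproduces the failure in the log:
	// the socket_vmnet daemon is not running or not listening.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy host the daemon (typically started as root, e.g. via launchd or sudo per the socket_vmnet documentation) holds this socket open; on this Jenkins agent every dial is refused, consistent with the daemon not running at all.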

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 node delete m03: exit status 83 (39.457542ms)

-- stdout --
	* The control-plane node multinode-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-638000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-638000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr: exit status 7 (28.843416ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:06:24.724753    4190 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:24.724909    4190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:24.724912    4190 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:24.724915    4190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:24.725038    4190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:24.725153    4190 out.go:298] Setting JSON to false
	I0725 11:06:24.725163    4190 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:06:24.725232    4190 notify.go:220] Checking for updates...
	I0725 11:06:24.725364    4190 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:24.725371    4190 status.go:255] checking status of multinode-638000 ...
	I0725 11:06:24.725602    4190 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:06:24.725607    4190 status.go:343] host is not running, skipping remaining checks
	I0725 11:06:24.725609    4190 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (29.454917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
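The post-mortem helpers poll with status --format={{.Host}}, which is a plain Go text/template rendered against the status struct dumped at status.go:257 above. A trimmed-down sketch of that rendering (the struct here is cut down to the fields visible in the dump, not copied from minikube source):

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields visible in the log's dump:
	// &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped ...}
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := Status{Name: "multinode-638000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// --format={{.Host}} compiles to exactly this template, which is
		// why the helper sees the bare string "Stopped" on stdout.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		tmpl.Execute(os.Stdout, st)
	}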

TestMultiNode/serial/StopMultiNode (3.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-638000 stop: (3.499870958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status: exit status 7 (62.409958ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr: exit status 7 (31.856042ms)

-- stdout --
	multinode-638000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0725 11:06:28.348835    4214 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:28.348983    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:28.348986    4214 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:28.348988    4214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:28.349137    4214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:28.349251    4214 out.go:298] Setting JSON to false
	I0725 11:06:28.349263    4214 mustload.go:65] Loading cluster: multinode-638000
	I0725 11:06:28.349327    4214 notify.go:220] Checking for updates...
	I0725 11:06:28.349460    4214 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:28.349466    4214 status.go:255] checking status of multinode-638000 ...
	I0725 11:06:28.349667    4214 status.go:330] multinode-638000 host status = "Stopped" (err=<nil>)
	I0725 11:06:28.349671    4214 status.go:343] host is not running, skipping remaining checks
	I0725 11:06:28.349673    4214 status.go:257] multinode-638000 status: &{Name:multinode-638000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr": multinode-638000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-638000 status --alsologtostderr": multinode-638000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (30.056083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.62s)
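Note that the stop itself succeeded (3.5s, exit status 0); the failure is the follow-up assertion, which expects one "host: Stopped" stanza per node of the two-node cluster and finds only the control plane, since the second node was never created. A hedged sketch of that counting check (the expected count of 2 and the helper shape are assumptions, not the literal test code):

	package main

	import (
		"fmt"
		"strings"
	)

	// countStopped counts "host: Stopped" stanzas in `minikube status`
	// output; a multi-node cluster should print one stanza per node.
	func countStopped(statusOut string) int {
		return strings.Count(statusOut, "host: Stopped")
	}

	func main() {
		out := "multinode-638000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		const want = 2 // control plane + one worker (assumed)
		if got := countStopped(out); got != want {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
		}
	}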

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-638000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-638000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.179780042s)

-- stdout --
	* [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	* Restarting existing qemu2 VM for "multinode-638000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-638000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:06:28.408600    4218 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:28.408717    4218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:28.408720    4218 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:28.408723    4218 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:28.408835    4218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:28.409829    4218 out.go:298] Setting JSON to false
	I0725 11:06:28.425695    4218 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3952,"bootTime":1721926836,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:06:28.425766    4218 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:06:28.430555    4218 out.go:177] * [multinode-638000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:06:28.437497    4218 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:06:28.437521    4218 notify.go:220] Checking for updates...
	I0725 11:06:28.444453    4218 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:06:28.447460    4218 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:06:28.450440    4218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:06:28.453444    4218 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:06:28.456449    4218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:06:28.459685    4218 config.go:182] Loaded profile config "multinode-638000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:28.459953    4218 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:06:28.464426    4218 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:06:28.470448    4218 start.go:297] selected driver: qemu2
	I0725 11:06:28.470456    4218 start.go:901] validating driver "qemu2" against &{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:06:28.470525    4218 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:06:28.472707    4218 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:06:28.472748    4218 cni.go:84] Creating CNI manager for ""
	I0725 11:06:28.472753    4218 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0725 11:06:28.472805    4218 start.go:340] cluster config:
	{Name:multinode-638000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-638000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:06:28.476358    4218 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:28.483454    4218 out.go:177] * Starting "multinode-638000" primary control-plane node in "multinode-638000" cluster
	I0725 11:06:28.487436    4218 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:06:28.487449    4218 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:06:28.487457    4218 cache.go:56] Caching tarball of preloaded images
	I0725 11:06:28.487502    4218 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:06:28.487507    4218 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:06:28.487555    4218 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/multinode-638000/config.json ...
	I0725 11:06:28.487970    4218 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:06:28.487999    4218 start.go:364] duration metric: took 23.834µs to acquireMachinesLock for "multinode-638000"
	I0725 11:06:28.488010    4218 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:06:28.488015    4218 fix.go:54] fixHost starting: 
	I0725 11:06:28.488136    4218 fix.go:112] recreateIfNeeded on multinode-638000: state=Stopped err=<nil>
	W0725 11:06:28.488146    4218 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:06:28.496408    4218 out.go:177] * Restarting existing qemu2 VM for "multinode-638000" ...
	I0725 11:06:28.500451    4218 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:06:28.500487    4218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:eb:85:2b:3c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:06:28.502588    4218 main.go:141] libmachine: STDOUT: 
	I0725 11:06:28.502611    4218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:06:28.502639    4218 fix.go:56] duration metric: took 14.624292ms for fixHost
	I0725 11:06:28.502644    4218 start.go:83] releasing machines lock for "multinode-638000", held for 14.63975ms
	W0725 11:06:28.502651    4218 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:06:28.502684    4218 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:28.502689    4218 start.go:729] Will try again in 5 seconds ...
	I0725 11:06:33.504762    4218 start.go:360] acquireMachinesLock for multinode-638000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:06:33.505222    4218 start.go:364] duration metric: took 366.083µs to acquireMachinesLock for "multinode-638000"
	I0725 11:06:33.505357    4218 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:06:33.505376    4218 fix.go:54] fixHost starting: 
	I0725 11:06:33.506075    4218 fix.go:112] recreateIfNeeded on multinode-638000: state=Stopped err=<nil>
	W0725 11:06:33.506108    4218 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:06:33.510839    4218 out.go:177] * Restarting existing qemu2 VM for "multinode-638000" ...
	I0725 11:06:33.515573    4218 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:06:33.515786    4218 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:eb:85:2b:3c:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/multinode-638000/disk.qcow2
	I0725 11:06:33.524504    4218 main.go:141] libmachine: STDOUT: 
	I0725 11:06:33.524588    4218 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:06:33.524723    4218 fix.go:56] duration metric: took 19.346625ms for fixHost
	I0725 11:06:33.524741    4218 start.go:83] releasing machines lock for "multinode-638000", held for 19.494042ms
	W0725 11:06:33.524952    4218 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-638000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:33.533605    4218 out.go:177] 
	W0725 11:06:33.536615    4218 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:06:33.536644    4218 out.go:239] * 
	* 
	W0725 11:06:33.539117    4218 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:06:33.547560    4218 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-638000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (67.1575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
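The restart path above makes exactly two attempts: fixHost fails, start.go logs "Will try again in 5 seconds ...", sleeps, retries once, and then exits with GUEST_PROVISION. A minimal sketch of that observable retry shape (fixed two attempts and a 5-second pause, as the timestamps at 11:06:28 and 11:06:33 show; this is not the actual minikube implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the behavior in the log: one immediate
	// attempt, then a single retry after a fixed 5-second pause.
	func startWithRetry(start func() error) error {
		err := start()
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return start()
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}

Both attempts hit the same refused socket, which is why the whole subtest completes in about 5.25s: two fast dial failures separated by one 5-second sleep.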

TestMultiNode/serial/ValidateNameConflict (20s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-638000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-638000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-638000-m01 --driver=qemu2 : exit status 80 (9.822421458s)

-- stdout --
	* [multinode-638000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-638000-m01" primary control-plane node in "multinode-638000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-638000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-638000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-638000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-638000-m02 --driver=qemu2 : exit status 80 (9.95360775s)

-- stdout --
	* [multinode-638000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-638000-m02" primary control-plane node in "multinode-638000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-638000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-638000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-638000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-638000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-638000: exit status 83 (77.396542ms)

-- stdout --
	* The control-plane node multinode-638000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-638000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-638000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-638000 -n multinode-638000: exit status 7 (29.200542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-638000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.00s)
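ValidateNameConflict deliberately starts profiles named multinode-638000-m01 and -m02, which collide with minikube's <cluster>-mNN node-naming convention; with both starts already failing on socket_vmnet, the conflict behavior is never meaningfully exercised here. A sketch of the kind of suffix check the test is probing (the regexp and message are illustrative assumptions, not minikube's actual validation code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// nodeSuffix matches profile names that look like a node of another
	// cluster, e.g. "multinode-638000-m01" -> cluster "multinode-638000".
	var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

	func looksLikeNodeName(profile string) (cluster string, ok bool) {
		if m := nodeSuffix.FindStringSubmatch(profile); m != nil {
			return m[1], true
		}
		return "", false
	}

	func main() {
		if cluster, ok := looksLikeNodeName("multinode-638000-m01"); ok {
			fmt.Printf("profile name conflicts with a node of cluster %q\n", cluster)
		}
	}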

TestPreload (9.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-529000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-529000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.731858875s)

-- stdout --
	* [test-preload-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-529000" primary control-plane node in "test-preload-529000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-529000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:06:53.758080    4282 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:06:53.758217    4282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:53.758219    4282 out.go:304] Setting ErrFile to fd 2...
	I0725 11:06:53.758232    4282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:06:53.758364    4282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:06:53.759425    4282 out.go:298] Setting JSON to false
	I0725 11:06:53.775372    4282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3977,"bootTime":1721926836,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:06:53.775461    4282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:06:53.781462    4282 out.go:177] * [test-preload-529000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:06:53.789622    4282 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:06:53.789667    4282 notify.go:220] Checking for updates...
	I0725 11:06:53.797572    4282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:06:53.800647    4282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:06:53.803621    4282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:06:53.806611    4282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:06:53.809652    4282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:06:53.811370    4282 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:06:53.811428    4282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:06:53.815626    4282 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:06:53.822434    4282 start.go:297] selected driver: qemu2
	I0725 11:06:53.822441    4282 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:06:53.822446    4282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:06:53.824575    4282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:06:53.827601    4282 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:06:53.830689    4282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:06:53.830705    4282 cni.go:84] Creating CNI manager for ""
	I0725 11:06:53.830711    4282 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:06:53.830723    4282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:06:53.830744    4282 start.go:340] cluster config:
	{Name:test-preload-529000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:06:53.834452    4282 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.841585    4282 out.go:177] * Starting "test-preload-529000" primary control-plane node in "test-preload-529000" cluster
	I0725 11:06:53.845661    4282 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0725 11:06:53.845746    4282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/test-preload-529000/config.json ...
	I0725 11:06:53.845762    4282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/test-preload-529000/config.json: {Name:mk8905405364f40badbcb47d5ef0798cfa2a396b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:06:53.845801    4282 cache.go:107] acquiring lock: {Name:mk5653692817070271d2551157724158266313f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.845811    4282 cache.go:107] acquiring lock: {Name:mka057bb1e6d5c562a688c256b56567f8b0105cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.845830    4282 cache.go:107] acquiring lock: {Name:mkc2e384b6740abfdbc0e6a01670cb3b4897a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.845874    4282 cache.go:107] acquiring lock: {Name:mk0dea9280c570ee6d809936c92d1628fbb95815 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.845940    4282 cache.go:107] acquiring lock: {Name:mke5377a69a152aa63fc3707ea8695120b1a2745 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.846013    4282 cache.go:107] acquiring lock: {Name:mka5a35cbdd72b550a416ec090d1c25b5c42d948 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.846048    4282 cache.go:107] acquiring lock: {Name:mkb3bce18900b827fef524d2540f71d65b803676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.846086    4282 cache.go:107] acquiring lock: {Name:mk19d916bcf3e04ec49b96df0ed851c3fdba0f09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:06:53.846236    4282 start.go:360] acquireMachinesLock for test-preload-529000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:06:53.846272    4282 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0725 11:06:53.846277    4282 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:06:53.846279    4282 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 11:06:53.846285    4282 start.go:364] duration metric: took 33.666µs to acquireMachinesLock for "test-preload-529000"
	I0725 11:06:53.846285    4282 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0725 11:06:53.846289    4282 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0725 11:06:53.846303    4282 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0725 11:06:53.846298    4282 start.go:93] Provisioning new machine with config: &{Name:test-preload-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:06:53.846328    4282 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:06:53.846447    4282 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:06:53.846768    4282 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:06:53.850651    4282 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:06:53.857822    4282 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:06:53.857828    4282 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0725 11:06:53.857924    4282 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0725 11:06:53.857966    4282 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0725 11:06:53.858127    4282 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0725 11:06:53.858465    4282 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0725 11:06:53.860009    4282 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:06:53.860008    4282 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:06:53.868651    4282 start.go:159] libmachine.API.Create for "test-preload-529000" (driver="qemu2")
	I0725 11:06:53.868676    4282 client.go:168] LocalClient.Create starting
	I0725 11:06:53.868754    4282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:06:53.868785    4282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:06:53.868793    4282 main.go:141] libmachine: Parsing certificate...
	I0725 11:06:53.868835    4282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:06:53.868859    4282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:06:53.868875    4282 main.go:141] libmachine: Parsing certificate...
	I0725 11:06:53.869243    4282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:06:54.019728    4282 main.go:141] libmachine: Creating SSH key...
	I0725 11:06:54.057646    4282 main.go:141] libmachine: Creating Disk image...
	I0725 11:06:54.057665    4282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:06:54.057853    4282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
	I0725 11:06:54.067814    4282 main.go:141] libmachine: STDOUT: 
	I0725 11:06:54.067833    4282 main.go:141] libmachine: STDERR: 
	I0725 11:06:54.067885    4282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2 +20000M
	I0725 11:06:54.076316    4282 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:06:54.076332    4282 main.go:141] libmachine: STDERR: 
	I0725 11:06:54.076354    4282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
	I0725 11:06:54.076359    4282 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:06:54.076370    4282 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:06:54.076395    4282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:d2:12:c7:dd:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
	I0725 11:06:54.078398    4282 main.go:141] libmachine: STDOUT: 
	I0725 11:06:54.078414    4282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:06:54.078428    4282 client.go:171] duration metric: took 209.755208ms to LocalClient.Create
	I0725 11:06:54.320083    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0725 11:06:54.323315    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0725 11:06:54.327121    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0725 11:06:54.368823    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0725 11:06:54.376471    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0725 11:06:54.403148    4282 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0725 11:06:54.403175    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0725 11:06:54.432586    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0725 11:06:54.510623    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0725 11:06:54.510675    4282 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 664.764292ms
	I0725 11:06:54.510762    4282 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0725 11:06:54.694201    4282 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0725 11:06:54.694287    4282 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 11:06:54.981147    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 11:06:54.981216    4282 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.135444917s
	I0725 11:06:54.981241    4282 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 11:06:56.078564    4282 start.go:128] duration metric: took 2.232279584s to createHost
	I0725 11:06:56.078616    4282 start.go:83] releasing machines lock for "test-preload-529000", held for 2.232388209s
	W0725 11:06:56.078666    4282 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:56.088952    4282 out.go:177] * Deleting "test-preload-529000" in qemu2 ...
	W0725 11:06:56.115327    4282 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:06:56.115357    4282 start.go:729] Will try again in 5 seconds ...
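
A StartHost failure is not fatal on the first pass: the half-created machine is deleted and start.go schedules one more attempt after a fixed five-second pause, while image caching continues in the background. Condensed to its shape (hypothetical names, not minikube's actual API):

	// retry.go: a hypothetical condensation of the behavior logged above:
	// one cleanup-and-retry with a fixed delay between attempts.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retryCreate(create func() error, cleanup func(), attempts int, delay time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = create(); err == nil {
				return nil
			}
			cleanup() // corresponds to: * Deleting "test-preload-529000" in qemu2 ...
			if i < attempts-1 {
				time.Sleep(delay) // corresponds to: Will try again in 5 seconds ...
			}
		}
		return err
	}

	func main() {
		err := retryCreate(
			func() error { return errors.New(`Failed to connect to "/var/run/socket_vmnet"`) },
			func() {},
			2, 5*time.Second,
		)
		fmt.Println(err) // both attempts fail while the daemon is down
	}
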
	I0725 11:06:56.681066    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0725 11:06:56.681138    4282 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.835257666s
	I0725 11:06:56.681171    4282 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0725 11:06:57.236395    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0725 11:06:57.236445    4282 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.390740084s
	I0725 11:06:57.236471    4282 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0725 11:06:58.302517    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0725 11:06:58.302572    4282 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.456846333s
	I0725 11:06:58.302598    4282 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0725 11:06:58.641684    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0725 11:06:58.641731    4282 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.796063916s
	I0725 11:06:58.641760    4282 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0725 11:06:59.219867    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0725 11:06:59.219908    4282 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.373977208s
	I0725 11:06:59.219933    4282 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0725 11:07:01.115410    4282 start.go:360] acquireMachinesLock for test-preload-529000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:07:01.115800    4282 start.go:364] duration metric: took 310.625µs to acquireMachinesLock for "test-preload-529000"
	I0725 11:07:01.115937    4282 start.go:93] Provisioning new machine with config: &{Name:test-preload-529000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-529000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:07:01.116181    4282 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:07:01.128795    4282 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:07:01.177722    4282 start.go:159] libmachine.API.Create for "test-preload-529000" (driver="qemu2")
	I0725 11:07:01.177786    4282 client.go:168] LocalClient.Create starting
	I0725 11:07:01.177906    4282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:07:01.177963    4282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:01.177989    4282 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:01.178051    4282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:07:01.178110    4282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:07:01.178124    4282 main.go:141] libmachine: Parsing certificate...
	I0725 11:07:01.178634    4282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:07:01.336398    4282 main.go:141] libmachine: Creating SSH key...
	I0725 11:07:01.391447    4282 main.go:141] libmachine: Creating Disk image...
	I0725 11:07:01.391454    4282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:07:01.391597    4282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
	I0725 11:07:01.400876    4282 main.go:141] libmachine: STDOUT: 
	I0725 11:07:01.400914    4282 main.go:141] libmachine: STDERR: 
	I0725 11:07:01.400973    4282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2 +20000M
	I0725 11:07:01.409010    4282 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:07:01.409029    4282 main.go:141] libmachine: STDERR: 
	I0725 11:07:01.409040    4282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
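
Disk creation is two qemu-img invocations: convert the raw seed image to qcow2, then grow the qcow2 by the requested 20000 MB. Driven from Go it reduces to roughly the following (a sketch; relative file names here stand in for the absolute machine paths above, and qemu-img must be on PATH):

	// mkdisk.go: a sketch of the two-step disk build shown above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func makeDisk(raw, qcow2 string, extraMB int) error {
		steps := [][]string{
			{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
			{"qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %w: %s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		// 20000 matches the Disk=20000MB in the create log above.
		if err := makeDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}

Note that both steps succeed in this run; the failure only happens afterwards, when the VM start is funneled through socket_vmnet_client.
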
	I0725 11:07:01.409046    4282 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:07:01.409057    4282 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:07:01.409091    4282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:a2:d4:ef:be:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/test-preload-529000/disk.qcow2
	I0725 11:07:01.410846    4282 main.go:141] libmachine: STDOUT: 
	I0725 11:07:01.410876    4282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:07:01.410886    4282 client.go:171] duration metric: took 233.102417ms to LocalClient.Create
	I0725 11:07:01.898639    4282 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0725 11:07:01.898713    4282 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.052996334s
	I0725 11:07:01.898737    4282 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0725 11:07:01.898799    4282 cache.go:87] Successfully saved all images to host disk.
	I0725 11:07:03.413107    4282 start.go:128] duration metric: took 2.296953791s to createHost
	I0725 11:07:03.413176    4282 start.go:83] releasing machines lock for "test-preload-529000", held for 2.297417625s
	W0725 11:07:03.413520    4282 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-529000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:07:03.429223    4282 out.go:177] 
	W0725 11:07:03.433259    4282 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:07:03.433303    4282 out.go:239] * 
	* 
	W0725 11:07:03.435892    4282 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:07:03.447187    4282 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-529000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-25 11:07:03.465574 -0700 PDT m=+2349.396359001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-529000 -n test-preload-529000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-529000 -n test-preload-529000: exit status 7 (64.46525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-529000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-529000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-529000
--- FAIL: TestPreload (9.88s)

TestScheduledStopUnix (10.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-925000 --memory=2048 --driver=qemu2 
E0725 11:07:10.217281    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-925000 --memory=2048 --driver=qemu2 : exit status 80 (10.135054042s)

-- stdout --
	* [scheduled-stop-925000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-925000" primary control-plane node in "scheduled-stop-925000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-925000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-925000" primary control-plane node in "scheduled-stop-925000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-925000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-925000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-25 11:07:13.744391 -0700 PDT m=+2359.675480626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-925000 -n scheduled-stop-925000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-925000 -n scheduled-stop-925000: exit status 7 (67.540667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-925000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-925000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-925000
--- FAIL: TestScheduledStopUnix (10.28s)

TestSkaffold (12.68s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3915640557 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3915640557 version: (1.066050041s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-941000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-941000 --memory=2600 --driver=qemu2 : exit status 80 (9.88874525s)

-- stdout --
	* [skaffold-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-941000" primary control-plane node in "skaffold-941000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-941000" primary control-plane node in "skaffold-941000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-25 11:07:26.425294 -0700 PDT m=+2372.356759167
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-941000 -n skaffold-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-941000 -n skaffold-941000: exit status 7 (62.419209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-941000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-941000
--- FAIL: TestSkaffold (12.68s)

TestRunningBinaryUpgrade (592.25s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3371620696 start -p running-upgrade-159000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3371620696 start -p running-upgrade-159000 --memory=2200 --vm-driver=qemu2 : (54.860785083s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-159000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0725 11:09:15.234954    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 11:10:13.283127    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-159000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.788314s)

-- stdout --
	* [running-upgrade-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-159000" primary control-plane node in "running-upgrade-159000" cluster
	* Updating the running qemu2 "running-upgrade-159000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0725 11:09:04.542273    4677 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:09:04.542400    4677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:09:04.542403    4677 out.go:304] Setting ErrFile to fd 2...
	I0725 11:09:04.542406    4677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:09:04.542549    4677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:09:04.543580    4677 out.go:298] Setting JSON to false
	I0725 11:09:04.560029    4677 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4108,"bootTime":1721926836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:09:04.560123    4677 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:09:04.564927    4677 out.go:177] * [running-upgrade-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:09:04.571931    4677 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:09:04.571980    4677 notify.go:220] Checking for updates...
	I0725 11:09:04.577840    4677 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:09:04.580851    4677 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:09:04.582085    4677 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:09:04.584851    4677 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:09:04.587887    4677 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:09:04.591177    4677 config.go:182] Loaded profile config "running-upgrade-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:09:04.594831    4677 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 11:09:04.597833    4677 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:09:04.601857    4677 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:09:04.611825    4677 start.go:297] selected driver: qemu2
	I0725 11:09:04.611830    4677 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50303 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:09:04.611885    4677 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:09:04.614365    4677 cni.go:84] Creating CNI manager for ""
	I0725 11:09:04.614465    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:09:04.614495    4677 start.go:340] cluster config:
	{Name:running-upgrade-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50303 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:09:04.614554    4677 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:09:04.621922    4677 out.go:177] * Starting "running-upgrade-159000" primary control-plane node in "running-upgrade-159000" cluster
	I0725 11:09:04.625821    4677 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:09:04.625837    4677 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0725 11:09:04.625845    4677 cache.go:56] Caching tarball of preloaded images
	I0725 11:09:04.625916    4677 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:09:04.625922    4677 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0725 11:09:04.625974    4677 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/config.json ...
	I0725 11:09:04.626338    4677 start.go:360] acquireMachinesLock for running-upgrade-159000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:09:04.626373    4677 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "running-upgrade-159000"
	I0725 11:09:04.626383    4677 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:09:04.626388    4677 fix.go:54] fixHost starting: 
	I0725 11:09:04.627011    4677 fix.go:112] recreateIfNeeded on running-upgrade-159000: state=Running err=<nil>
	W0725 11:09:04.627020    4677 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:09:04.630830    4677 out.go:177] * Updating the running qemu2 "running-upgrade-159000" VM ...
	I0725 11:09:04.640836    4677 machine.go:94] provisionDockerMachine start ...
	I0725 11:09:04.640925    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:04.641075    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:04.641083    4677 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 11:09:04.715122    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-159000
	
	I0725 11:09:04.715140    4677 buildroot.go:166] provisioning hostname "running-upgrade-159000"
	I0725 11:09:04.715185    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:04.715308    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:04.715313    4677 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-159000 && echo "running-upgrade-159000" | sudo tee /etc/hostname
	I0725 11:09:04.782075    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-159000
	
	I0725 11:09:04.782122    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:04.782232    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:04.782240    4677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-159000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-159000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-159000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 11:09:04.843605    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 11:09:04.843615    4677 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1196/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1196/.minikube}
	I0725 11:09:04.843621    4677 buildroot.go:174] setting up certificates
	I0725 11:09:04.843626    4677 provision.go:84] configureAuth start
	I0725 11:09:04.843634    4677 provision.go:143] copyHostCerts
	I0725 11:09:04.843687    4677 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem, removing ...
	I0725 11:09:04.843693    4677 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem
	I0725 11:09:04.843797    4677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem (1675 bytes)
	I0725 11:09:04.843971    4677 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem, removing ...
	I0725 11:09:04.843975    4677 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem
	I0725 11:09:04.844014    4677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem (1078 bytes)
	I0725 11:09:04.844108    4677 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem, removing ...
	I0725 11:09:04.844115    4677 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem
	I0725 11:09:04.844153    4677 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem (1123 bytes)
	I0725 11:09:04.844235    4677 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-159000 san=[127.0.0.1 localhost minikube running-upgrade-159000]
	I0725 11:09:04.887079    4677 provision.go:177] copyRemoteCerts
	I0725 11:09:04.887122    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 11:09:04.887130    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:09:04.919622    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 11:09:04.927566    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0725 11:09:04.934253    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 11:09:04.941439    4677 provision.go:87] duration metric: took 97.808417ms to configureAuth
	I0725 11:09:04.941448    4677 buildroot.go:189] setting minikube options for container-runtime
	I0725 11:09:04.941561    4677 config.go:182] Loaded profile config "running-upgrade-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:09:04.941593    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:04.941685    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:04.941693    4677 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 11:09:05.003639    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 11:09:05.003652    4677 buildroot.go:70] root file system type: tmpfs
	I0725 11:09:05.003709    4677 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 11:09:05.003765    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:05.003880    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:05.003914    4677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 11:09:05.067072    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 11:09:05.067119    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:05.067225    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:05.067233    4677 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 11:09:05.133480    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 11:09:05.133491    4677 machine.go:97] duration metric: took 492.655333ms to provisionDockerMachine
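
Note the compare-then-swap in the previous SSH command: the rendered unit goes to docker.service.new, and only if diff reports a difference is it moved into place and followed by daemon-reload and a docker restart, so an unchanged unit never triggers a disruptive restart. The same write-if-changed idiom in Go (a sketch, not minikube's code; restart stands in for the daemon-reload plus restart pair):

	// unitupdate.go: update a config file and run the restart hook only
	// when the rendered content actually differs from what is on disk.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func writeIfChanged(path string, rendered []byte, restart func() error) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unit unchanged: skip the disruptive restart
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		return restart()
	}

	func main() {
		err := writeIfChanged("docker.service", []byte("[Unit]\n"), func() error {
			fmt.Println("systemctl daemon-reload && systemctl restart docker")
			return nil
		})
		fmt.Println(err)
	}
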
	I0725 11:09:05.133497    4677 start.go:293] postStartSetup for "running-upgrade-159000" (driver="qemu2")
	I0725 11:09:05.133503    4677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 11:09:05.133561    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 11:09:05.133573    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:09:05.167266    4677 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 11:09:05.168729    4677 info.go:137] Remote host: Buildroot 2021.02.12
	I0725 11:09:05.168737    4677 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/addons for local assets ...
	I0725 11:09:05.168802    4677 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/files for local assets ...
	I0725 11:09:05.168902    4677 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem -> 16942.pem in /etc/ssl/certs
	I0725 11:09:05.169006    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 11:09:05.171619    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:09:05.178786    4677 start.go:296] duration metric: took 45.28575ms for postStartSetup
	I0725 11:09:05.178801    4677 fix.go:56] duration metric: took 552.429584ms for fixHost
	I0725 11:09:05.178833    4677 main.go:141] libmachine: Using SSH client type: native
	I0725 11:09:05.178939    4677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10460ea10] 0x104611270 <nil>  [] 0s} localhost 50271 <nil> <nil>}
	I0725 11:09:05.178944    4677 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 11:09:05.243441    4677 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721930945.722559055
	
	I0725 11:09:05.243447    4677 fix.go:216] guest clock: 1721930945.722559055
	I0725 11:09:05.243451    4677 fix.go:229] Guest: 2024-07-25 11:09:05.722559055 -0700 PDT Remote: 2024-07-25 11:09:05.178803 -0700 PDT m=+0.657462834 (delta=543.756055ms)
	I0725 11:09:05.243461    4677 fix.go:200] guest clock delta is within tolerance: 543.756055ms
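
fix.go reads the guest's clock with `date +%s.%N` over SSH and compares it against host time; here the ~544ms delta is inside tolerance, so no resync is needed. The comparison, sketched below (the 2s tolerance is an assumption; the log only shows that 543ms passed):

	// clockcheck.go: a sketch of the guest-clock tolerance check. Parsing
	// through float64 loses a few hundred nanoseconds, which is irrelevant
	// at a seconds-scale tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func withinTolerance(guest string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guest, 64) // guest prints `date +%s.%N`
		if err != nil {
			return 0, false, err
		}
		delta := time.Unix(0, int64(secs*float64(time.Second))).Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tol, nil
	}

	func main() {
		// Values from the log: guest 1721930945.722559055, remote 11:09:05.178803.
		d, ok, _ := withinTolerance("1721930945.722559055", time.Unix(1721930945, 178803000), 2*time.Second)
		fmt.Println(d, ok) // ~543.756ms true
	}
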
	I0725 11:09:05.243464    4677 start.go:83] releasing machines lock for "running-upgrade-159000", held for 617.105584ms
	I0725 11:09:05.243515    4677 ssh_runner.go:195] Run: cat /version.json
	I0725 11:09:05.243523    4677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 11:09:05.243524    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:09:05.243544    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	W0725 11:09:05.244051    4677 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50271: connect: connection refused
	I0725 11:09:05.244070    4677 retry.go:31] will retry after 264.446765ms: dial tcp [::1]:50271: connect: connection refused
	W0725 11:09:05.560175    4677 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0725 11:09:05.560346    4677 ssh_runner.go:195] Run: systemctl --version
	I0725 11:09:05.564061    4677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 11:09:05.567147    4677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 11:09:05.567189    4677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0725 11:09:05.571986    4677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0725 11:09:05.578983    4677 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0725 11:09:05.578998    4677 start.go:495] detecting cgroup driver to use...
	I0725 11:09:05.579121    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:09:05.587210    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0725 11:09:05.591353    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 11:09:05.595328    4677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 11:09:05.595359    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 11:09:05.599071    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:09:05.602605    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 11:09:05.606085    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:09:05.609073    4677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 11:09:05.611873    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 11:09:05.615030    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 11:09:05.618191    4677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 11:09:05.621002    4677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 11:09:05.623607    4677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 11:09:05.626228    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:05.723301    4677 ssh_runner.go:195] Run: sudo systemctl restart containerd
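The run of sed commands above rewrites /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false so containerd uses the cgroupfs driver. A small Go analogue of that single rewrite (the sample TOML and regexp are illustrative; minikube performs it remotely via sed over SSH):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
		// Match the line regardless of indentation and flip its value.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	}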
	I0725 11:09:05.730030    4677 start.go:495] detecting cgroup driver to use...
	I0725 11:09:05.730092    4677 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 11:09:05.738360    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:09:05.743189    4677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 11:09:05.750015    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:09:05.754419    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:09:05.758600    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:09:05.763942    4677 ssh_runner.go:195] Run: which cri-dockerd
	I0725 11:09:05.765412    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 11:09:05.767923    4677 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 11:09:05.772986    4677 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 11:09:05.861600    4677 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 11:09:05.959362    4677 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 11:09:05.959429    4677 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 11:09:05.964977    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:06.051327    4677 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:09:09.338528    4677 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.287282708s)
	I0725 11:09:09.338604    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 11:09:09.343531    4677 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 11:09:09.350065    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:09:09.354996    4677 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 11:09:09.435688    4677 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 11:09:09.497208    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:09.579412    4677 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 11:09:09.585807    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:09:09.590470    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:09.659628    4677 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 11:09:09.699270    4677 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 11:09:09.699351    4677 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 11:09:09.702178    4677 start.go:563] Will wait 60s for crictl version
	I0725 11:09:09.702232    4677 ssh_runner.go:195] Run: which crictl
	I0725 11:09:09.703507    4677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 11:09:09.715920    4677 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0725 11:09:09.715985    4677 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:09:09.728889    4677 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:09:09.745564    4677 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0725 11:09:09.745687    4677 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0725 11:09:09.747021    4677 kubeadm.go:883] updating cluster {Name:running-upgrade-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50303 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0725 11:09:09.747069    4677 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:09:09.747107    4677 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:09:09.757347    4677 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:09:09.757355    4677 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0725 11:09:09.757396    4677 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:09:09.760321    4677 ssh_runner.go:195] Run: which lz4
	I0725 11:09:09.761669    4677 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 11:09:09.763030    4677 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 11:09:09.763041    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0725 11:09:10.676855    4677 docker.go:649] duration metric: took 915.24425ms to copy over tarball
	I0725 11:09:10.676909    4677 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 11:09:11.923682    4677 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.246795542s)
	I0725 11:09:11.923695    4677 ssh_runner.go:146] rm: /preloaded.tar.lz4
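The preload step above is: scp the lz4 tarball to /preloaded.tar.lz4, untar it into /var while preserving the security.capability xattr, then delete it. A sketch of those two guest-side commands wrapped in Go (the wrapper is hypothetical; the flags are the ones in the log):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Same invocation as the log: lz4-decompress straight into /var.
		extract := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := extract.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		// Free the ~360MB tarball once its contents are unpacked.
		if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
			log.Fatal(err)
		}
	}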
	I0725 11:09:11.939914    4677 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:09:11.943264    4677 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0725 11:09:11.948790    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:12.026819    4677 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:09:12.403105    4677 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:09:12.421817    4677 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:09:12.421827    4677 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0725 11:09:12.421832    4677 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 11:09:12.426852    4677 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:09:12.429045    4677 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:09:12.431357    4677 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:09:12.431443    4677 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:09:12.432789    4677 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:09:12.433029    4677 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:09:12.434166    4677 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:09:12.435358    4677 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:09:12.435388    4677 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:09:12.435867    4677 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:09:12.436560    4677 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:09:12.436750    4677 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0725 11:09:12.437909    4677 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:09:12.437961    4677 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:09:12.438794    4677 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0725 11:09:12.439446    4677 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
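Each "needs transfer" decision that follows comes from comparing `docker image inspect --format {{.Id}}` output with the hash minikube expects. A sketch of that check; the function name and the truncated hash are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the image is absent or has the wrong ID.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the daemon at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/pause:3.7", "sha256:e5a475a0...")) // placeholder hash
	}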
	I0725 11:09:12.848191    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:09:12.862004    4677 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0725 11:09:12.862031    4677 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:09:12.862079    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:09:12.873761    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0725 11:09:12.873933    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:09:12.884638    4677 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0725 11:09:12.884664    4677 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:09:12.884716    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:09:12.884724    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:09:12.887368    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:09:12.897858    4677 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0725 11:09:12.897881    4677 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:09:12.897936    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:09:12.904763    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0725 11:09:12.909397    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0725 11:09:12.911098    4677 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0725 11:09:12.911115    4677 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:09:12.911151    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:09:12.914160    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0725 11:09:12.921851    4677 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0725 11:09:12.921869    4677 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:09:12.921915    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0725 11:09:12.923782    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0725 11:09:12.924567    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	W0725 11:09:12.936937    4677 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0725 11:09:12.937062    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:09:12.941174    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0725 11:09:12.941199    4677 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0725 11:09:12.941213    4677 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0725 11:09:12.941251    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0725 11:09:12.941298    4677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:09:12.956998    4677 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0725 11:09:12.957020    4677 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:09:12.957029    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0725 11:09:12.957041    4677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0725 11:09:12.957053    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0725 11:09:12.957079    4677 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:09:12.957133    4677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0725 11:09:12.983258    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0725 11:09:12.983303    4677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0725 11:09:12.983318    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0725 11:09:12.983363    4677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:09:13.011419    4677 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0725 11:09:13.011432    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0725 11:09:13.011922    4677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0725 11:09:13.011942    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W0725 11:09:13.033163    4677 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0725 11:09:13.033275    4677 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:09:13.101699    4677 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0725 11:09:13.121085    4677 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0725 11:09:13.121113    4677 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:09:13.121168    4677 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:09:13.136821    4677 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:09:13.136835    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0725 11:09:13.166843    4677 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 11:09:13.166982    4677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:09:13.236943    4677 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0725 11:09:13.236946    4677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 11:09:13.236981    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0725 11:09:13.294983    4677 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:09:13.295008    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0725 11:09:13.622677    4677 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 11:09:13.622702    4677 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:09:13.622714    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0725 11:09:13.760423    4677 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0725 11:09:13.760460    4677 cache_images.go:92] duration metric: took 1.338660875s to LoadCachedImages
	W0725 11:09:13.760507    4677 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
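Each image that survives the transfer is streamed into the daemon with `sudo cat ... | docker load`, as the Loading image lines above show. A sketch of that pipeline (the helper name is hypothetical; the path is from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadImage streams an image tarball into the Docker daemon.
	func loadImage(path string) error {
		out, err := exec.Command("/bin/bash", "-c",
			"sudo cat "+path+" | docker load").CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker load %s: %v\n%s", path, err, out)
		}
		return nil
	}

	func main() {
		if err := loadImage("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
			fmt.Println(err)
		}
	}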
	I0725 11:09:13.760515    4677 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0725 11:09:13.760582    4677 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-159000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 11:09:13.760653    4677 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 11:09:13.774947    4677 cni.go:84] Creating CNI manager for ""
	I0725 11:09:13.774960    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:09:13.774970    4677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 11:09:13.774982    4677 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-159000 NodeName:running-upgrade-159000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 11:09:13.775058    4677 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-159000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 11:09:13.775114    4677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0725 11:09:13.778741    4677 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 11:09:13.778772    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 11:09:13.781567    4677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0725 11:09:13.787332    4677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 11:09:13.792456    4677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
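One property worth checking in the shipped kubeadm.yaml is that the pod CIDR agrees everywhere it appears: the podSubnet above, the KubeProxyConfiguration clusterCIDR, and the bridge CNI subnet rewritten earlier are all 10.244.0.0/16. A toy regexp extraction of the field, purely illustrative:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		cfg := "networking:\n  dnsDomain: cluster.local\n  podSubnet: \"10.244.0.0/16\"\n"
		re := regexp.MustCompile(`(?m)^\s*podSubnet:\s*"([^"]+)"`)
		if m := re.FindStringSubmatch(cfg); m != nil {
			fmt.Println("podSubnet:", m[1]) // 10.244.0.0/16
		}
	}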
	I0725 11:09:13.797731    4677 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0725 11:09:13.799024    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:09:13.890743    4677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:09:13.895865    4677 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000 for IP: 10.0.2.15
	I0725 11:09:13.895872    4677 certs.go:194] generating shared ca certs ...
	I0725 11:09:13.895880    4677 certs.go:226] acquiring lock for ca certs: {Name:mk89636080cfada095e98fa6c0bd32580553affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:09:13.896048    4677 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key
	I0725 11:09:13.896096    4677 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key
	I0725 11:09:13.896104    4677 certs.go:256] generating profile certs ...
	I0725 11:09:13.896187    4677 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.key
	I0725 11:09:13.896205    4677 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key.6ebfbf24
	I0725 11:09:13.896213    4677 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt.6ebfbf24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
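certs.go above builds the apiserver serving certificate with four IP SANs. A standard-library sketch of issuing a certificate carrying those SANs (self-signed here for brevity; minikube signs with its profile CA):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs listed in the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}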
	I0725 11:09:14.051960    4677 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt.6ebfbf24 ...
	I0725 11:09:14.051971    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt.6ebfbf24: {Name:mk3e1f5fed881d779204741c120f55796daccf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:09:14.052443    4677 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key.6ebfbf24 ...
	I0725 11:09:14.052450    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key.6ebfbf24: {Name:mkbf480087c1030ff1bb5e7387ead17089264b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:09:14.052606    4677 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt.6ebfbf24 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt
	I0725 11:09:14.052765    4677 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key.6ebfbf24 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key
	I0725 11:09:14.052925    4677 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/proxy-client.key
	I0725 11:09:14.053058    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem (1338 bytes)
	W0725 11:09:14.053093    4677 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694_empty.pem, impossibly tiny 0 bytes
	I0725 11:09:14.053100    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 11:09:14.053128    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem (1078 bytes)
	I0725 11:09:14.053154    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem (1123 bytes)
	I0725 11:09:14.053181    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem (1675 bytes)
	I0725 11:09:14.053236    4677 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:09:14.053573    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 11:09:14.062839    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 11:09:14.070469    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 11:09:14.077976    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 11:09:14.084854    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 11:09:14.091150    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 11:09:14.098366    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 11:09:14.105861    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 11:09:14.112636    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /usr/share/ca-certificates/16942.pem (1708 bytes)
	I0725 11:09:14.118950    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 11:09:14.126137    4677 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem --> /usr/share/ca-certificates/1694.pem (1338 bytes)
	I0725 11:09:14.132823    4677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 11:09:14.137324    4677 ssh_runner.go:195] Run: openssl version
	I0725 11:09:14.139082    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694.pem && ln -fs /usr/share/ca-certificates/1694.pem /etc/ssl/certs/1694.pem"
	I0725 11:09:14.142404    4677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694.pem
	I0725 11:09:14.143837    4677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:36 /usr/share/ca-certificates/1694.pem
	I0725 11:09:14.143858    4677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694.pem
	I0725 11:09:14.145552    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694.pem /etc/ssl/certs/51391683.0"
	I0725 11:09:14.148061    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16942.pem && ln -fs /usr/share/ca-certificates/16942.pem /etc/ssl/certs/16942.pem"
	I0725 11:09:14.151325    4677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16942.pem
	I0725 11:09:14.152749    4677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:36 /usr/share/ca-certificates/16942.pem
	I0725 11:09:14.152767    4677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16942.pem
	I0725 11:09:14.154480    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16942.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 11:09:14.157246    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 11:09:14.160027    4677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:09:14.161517    4677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:09:14.161541    4677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:09:14.163274    4677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
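The link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes, which is why each ln -fs is preceded by `openssl x509 -hash -noout`. A sketch that reproduces the expected link name for minikubeCA.pem:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}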
	I0725 11:09:14.166301    4677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 11:09:14.167746    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 11:09:14.169542    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 11:09:14.171307    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 11:09:14.173120    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 11:09:14.174991    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 11:09:14.176833    4677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 11:09:14.178597    4677 kubeadm.go:392] StartCluster: {Name:running-upgrade-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50303 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-159000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:09:14.178672    4677 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:09:14.189280    4677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 11:09:14.192338    4677 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 11:09:14.192344    4677 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 11:09:14.192367    4677 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 11:09:14.195337    4677 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:09:14.195562    4677 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-159000" does not appear in /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:09:14.195617    4677 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1196/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-159000" cluster setting kubeconfig missing "running-upgrade-159000" context setting]
	I0725 11:09:14.195743    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:09:14.196396    4677 kapi.go:59] client config for running-upgrade-159000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a3fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:09:14.196743    4677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 11:09:14.199589    4677 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-159000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
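The drift check above keys off `sudo diff -u old new`: exit status 0 means the deployed kubeadm.yaml already matches, 1 means it drifted and the cluster is reconfigured from the .new file. A sketch of that exit-code branch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			fmt.Printf("config drift detected:\n%s", out)
			return
		}
		if err != nil {
			fmt.Println("diff failed:", err)
			return
		}
		fmt.Println("config unchanged")
	}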
	I0725 11:09:14.199595    4677 kubeadm.go:1160] stopping kube-system containers ...
	I0725 11:09:14.199634    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:09:14.212568    4677 docker.go:483] Stopping containers: [1ab7c68e5bcb 4e72586e4f02 7f5ae9df8f8f 0f72f05bb585 63b686f25808 92288953f452 c3a8ebdf5f1e af52e586dda6 0ddaaec7a5f2 ede47b8eaf34 2e5f977234d2 77241e4fa4cf]
	I0725 11:09:14.212641    4677 ssh_runner.go:195] Run: docker stop 1ab7c68e5bcb 4e72586e4f02 7f5ae9df8f8f 0f72f05bb585 63b686f25808 92288953f452 c3a8ebdf5f1e af52e586dda6 0ddaaec7a5f2 ede47b8eaf34 2e5f977234d2 77241e4fa4cf
	I0725 11:09:14.224865    4677 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 11:09:14.307784    4677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:09:14.311779    4677 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 25 18:08 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul 25 18:08 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 25 18:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 25 18:08 /etc/kubernetes/scheduler.conf
	
	I0725 11:09:14.311816    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf
	I0725 11:09:14.314948    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:09:14.314975    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:09:14.318281    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf
	I0725 11:09:14.321305    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:09:14.321332    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:09:14.324217    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf
	I0725 11:09:14.327098    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:09:14.327124    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:09:14.329860    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf
	I0725 11:09:14.332227    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:09:14.332246    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 11:09:14.335029    4677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:09:14.338101    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:09:14.358381    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:09:14.745690    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:09:14.979100    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:09:15.014576    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
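Because existing configuration files were found, the restart path replays individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full init. The same five commands as a loop (paths and version come from the log; the loop itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.24.1:$PATH\" kubeadm init phase " +
				p + " --config /var/tmp/minikube/kubeadm.yaml"
			if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
		fmt.Println("control-plane phases replayed")
	}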
	I0725 11:09:15.039484    4677 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:09:15.039563    4677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:09:15.541726    4677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:09:16.041528    4677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:09:16.045728    4677 api_server.go:72] duration metric: took 1.006276166s to wait for apiserver process to appear ...
	I0725 11:09:16.045737    4677 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:09:16.045745    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:21.046145    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:21.046197    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:26.047612    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:26.047699    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:31.048391    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:31.048472    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:36.049327    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:36.049417    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:41.050635    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:41.050715    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:46.052330    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:46.052392    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:51.052806    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:51.052888    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:09:56.054505    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:09:56.054584    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:01.057120    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:01.057195    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:06.059674    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:06.059716    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:11.061958    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:11.062052    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:16.064602    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
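The loop above polls https://10.0.2.15:8443/healthz, each attempt timing out on its own before the next retry; here the apiserver never answers, so the test falls back to gathering container logs. A minimal sketch of such a poller (the deadline and InsecureSkipVerify are simplifications; minikube verifies against the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}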
	I0725 11:10:16.064776    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:16.080955    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:16.081046    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:16.093611    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:16.093678    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:16.104540    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:16.104605    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:16.115029    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:16.115114    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:16.132968    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:16.133035    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:16.144325    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:16.144386    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:16.153589    4677 logs.go:276] 0 containers: []
	W0725 11:10:16.153607    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:16.153663    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:16.164080    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:16.164099    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:16.164114    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:16.178642    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:16.178655    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:16.205380    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:16.205387    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:16.276246    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:16.276262    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:16.291538    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:16.291551    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:16.303233    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:16.303246    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:16.314856    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:16.314870    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:16.352700    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:16.352711    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:16.378314    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:16.378324    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:16.392772    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:16.392782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:16.407598    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:16.407607    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:16.424154    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:16.424165    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:16.437562    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:16.437573    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:16.449016    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:16.449031    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:10:16.453157    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:16.453166    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:16.466868    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:16.466878    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:16.484448    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:16.484459    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
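
The cycle above repeats roughly every five seconds: a healthz probe is issued and, five seconds later, fails with "Client.Timeout exceeded while awaiting headers", which is the error Go's net/http client reports when its client-level Timeout elapses before response headers arrive. A minimal Go sketch of this probe pattern, assuming a 5-second client timeout and skipping TLS verification purely to keep the sketch self-contained; this is an illustration, not minikube's actual api_server.go:

// Sketch only: poll an apiserver /healthz endpoint with a 5s client timeout.
// When the apiserver never answers, client.Get returns the same
// "Client.Timeout exceeded while awaiting headers" error seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s cadence between log lines
		Transport: &http.Transport{
			// The guest apiserver presents a cluster-CA certificate;
			// verification is skipped here only to keep the sketch standalone.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			continue // the real loop also enforces an overall deadline
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
	}
}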
	I0725 11:10:18.998007    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:24.000745    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:24.001127    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:24.027996    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:24.028134    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:24.045965    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:24.046052    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:24.059061    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:24.059127    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:24.070797    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:24.070859    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:24.081292    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:24.081359    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:24.091894    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:24.091959    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:24.101589    4677 logs.go:276] 0 containers: []
	W0725 11:10:24.101602    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:24.101660    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:24.112282    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:24.112302    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:24.112307    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:24.124272    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:24.124284    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:24.136345    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:24.136358    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:24.147689    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:24.147702    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:24.159845    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:24.159859    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:10:24.171589    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:24.171600    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:24.206332    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:24.206344    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:24.230610    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:24.230620    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:24.241416    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:24.241426    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:24.257176    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:24.257187    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:24.274752    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:24.274762    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:24.288632    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:24.288643    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:24.302937    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:24.302949    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:24.316797    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:24.316823    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:24.328023    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:24.328037    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:24.352387    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:24.352395    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:24.388226    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:24.388234    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
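
Each failed probe triggers the same gathering cycle: for every control plane component, container IDs are enumerated with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, and the last 400 lines of each container's logs are tailed. A hypothetical Go helper mirroring that cycle; the component list, function shape, and output format here are assumptions for illustration, not minikube's logs.go:

// Sketch: enumerate per-component containers and tail their recent logs,
// mirroring the "N containers: [...]" / "docker logs --tail 400" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// corresponds to the W-level "No container was found" lines
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>; stderr is merged with stdout
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}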
	I0725 11:10:26.894205    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:31.895300    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:31.895578    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:31.921651    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:31.921762    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:31.938571    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:31.938641    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:31.957445    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:31.957509    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:31.968158    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:31.968225    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:31.986634    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:31.986694    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:31.997388    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:31.997454    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:32.007263    4677 logs.go:276] 0 containers: []
	W0725 11:10:32.007273    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:32.007327    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:32.017590    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:32.017611    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:32.017621    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:32.050975    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:32.050985    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:32.065167    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:32.065181    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:10:32.076332    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:32.076342    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:32.087371    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:32.087386    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:32.102338    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:32.102353    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:32.116285    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:32.116295    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:32.128296    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:32.128309    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:32.139993    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:32.140003    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:32.153640    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:32.153648    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:10:32.158525    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:32.158530    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:32.192867    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:32.192877    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:32.208433    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:32.208441    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:32.226157    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:32.226168    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:32.252350    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:32.252356    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:32.263823    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:32.263831    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:32.301044    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:32.301050    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:34.817648    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:39.820270    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:39.820521    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:39.845816    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:39.845926    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:39.862434    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:39.862503    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:39.875197    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:39.875255    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:39.885652    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:39.885723    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:39.896321    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:39.896384    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:39.907672    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:39.907741    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:39.917875    4677 logs.go:276] 0 containers: []
	W0725 11:10:39.917887    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:39.917944    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:39.927939    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:39.927958    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:39.927963    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:39.955211    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:39.955221    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:39.968948    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:39.968957    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:39.980609    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:39.980621    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:39.996068    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:39.996081    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:40.012977    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:40.012988    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:40.034702    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:40.034711    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:40.069126    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:40.069137    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:10:40.080556    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:40.080567    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:40.094931    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:40.094942    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:10:40.099316    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:40.099323    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:40.120500    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:40.120512    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:40.135728    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:40.135739    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:40.171283    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:40.171293    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:40.181935    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:40.181946    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:40.207471    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:40.207479    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:40.218648    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:40.218662    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:42.733676    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:47.735666    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:47.736084    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:47.775376    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:47.775497    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:47.797495    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:47.797612    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:47.812946    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:47.813010    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:47.828743    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:47.828814    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:47.839696    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:47.839750    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:47.850323    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:47.850390    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:47.861173    4677 logs.go:276] 0 containers: []
	W0725 11:10:47.861190    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:47.861239    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:47.876450    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:47.876470    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:47.876474    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:47.888260    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:47.888271    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:47.912613    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:47.912624    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:10:47.917194    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:47.917200    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:47.931361    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:47.931374    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:47.943101    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:47.943110    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:47.958522    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:47.958534    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:47.972835    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:47.972845    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:47.984431    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:47.984444    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:48.002404    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:48.002416    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:48.039818    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:48.039829    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:48.054218    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:48.054230    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:48.065917    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:48.065926    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:48.077963    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:48.077973    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:48.114524    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:48.114532    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:48.139927    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:48.139939    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:48.151788    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:48.151799    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:10:50.666555    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:10:55.667870    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:10:55.668368    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:10:55.708200    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:10:55.708341    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:10:55.733371    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:10:55.733464    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:10:55.748015    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:10:55.748108    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:10:55.761063    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:10:55.761141    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:10:55.771620    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:10:55.771680    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:10:55.782182    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:10:55.782245    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:10:55.792693    4677 logs.go:276] 0 containers: []
	W0725 11:10:55.792706    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:10:55.792761    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:10:55.803404    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:10:55.803420    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:10:55.803425    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:10:55.827761    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:10:55.827774    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:10:55.838908    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:10:55.838919    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:10:55.851016    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:10:55.851028    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:10:55.887386    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:10:55.887396    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:10:55.891689    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:10:55.891697    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:10:55.920821    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:10:55.920833    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:10:55.935726    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:10:55.935740    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:10:55.952953    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:10:55.952964    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:10:55.965742    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:10:55.965755    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:10:55.979910    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:10:55.979922    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:10:55.995628    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:10:55.995638    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:10:56.006742    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:10:56.006752    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:10:56.023496    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:10:56.023509    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:10:56.057561    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:10:56.057573    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:10:56.071304    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:10:56.071318    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:10:56.083140    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:10:56.083150    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:10:58.609420    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:03.612059    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:03.612590    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:03.652661    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:03.652786    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:03.673984    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:03.674091    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:03.689466    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:03.689534    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:03.702027    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:03.702093    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:03.719390    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:03.719462    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:03.729662    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:03.729729    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:03.740011    4677 logs.go:276] 0 containers: []
	W0725 11:11:03.740025    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:03.740078    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:03.750275    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:03.750292    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:03.750298    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:03.788947    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:03.788957    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:03.814560    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:03.814573    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:03.826191    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:03.826203    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:03.838003    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:03.838016    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:03.874268    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:03.874280    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:03.889100    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:03.889115    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:03.901536    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:03.901549    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:03.913346    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:03.913357    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:03.932338    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:03.932352    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:03.943533    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:03.943545    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:03.966822    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:03.966829    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:03.970777    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:03.970782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:03.984273    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:03.984284    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:03.998355    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:03.998369    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:04.009531    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:04.009543    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:04.033350    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:04.033359    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:06.544723    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:11.545473    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:11.545659    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:11.567501    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:11.567596    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:11.581543    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:11.581610    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:11.592301    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:11.592364    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:11.602344    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:11.602411    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:11.612606    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:11.612664    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:11.623251    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:11.623316    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:11.633547    4677 logs.go:276] 0 containers: []
	W0725 11:11:11.633559    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:11.633608    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:11.649013    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:11.649039    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:11.649045    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:11.674096    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:11.674108    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:11.685276    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:11.685286    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:11.701841    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:11.701855    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:11.726984    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:11.726991    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:11.740504    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:11.740516    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:11.751508    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:11.751520    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:11.786999    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:11.787005    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:11.821458    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:11.821470    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:11.836069    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:11.836080    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:11.848573    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:11.848587    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:11.866124    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:11.866133    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:11.878699    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:11.878709    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:11.890313    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:11.890322    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:11.894620    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:11.894628    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:11.909002    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:11.909014    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:11.925281    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:11.925292    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:14.437471    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:19.439613    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:19.439874    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:19.478661    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:19.478783    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:19.499761    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:19.499851    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:19.514571    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:19.514642    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:19.526823    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:19.526890    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:19.537299    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:19.537364    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:19.548231    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:19.548292    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:19.558413    4677 logs.go:276] 0 containers: []
	W0725 11:11:19.558427    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:19.558475    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:19.569181    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:19.569199    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:19.569204    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:19.604490    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:19.604500    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:19.622718    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:19.622732    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:19.647508    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:19.647521    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:19.659212    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:19.659225    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:19.663655    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:19.663665    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:19.688501    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:19.688510    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:19.701163    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:19.701177    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:19.712650    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:19.712660    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:19.753602    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:19.753615    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:19.773982    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:19.773993    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:19.785884    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:19.785897    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:19.797575    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:19.797586    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:19.808859    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:19.808868    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:19.827430    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:19.827441    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:19.843968    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:19.843978    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:19.855896    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:19.855908    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:22.377086    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:27.379348    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:27.379552    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:27.397340    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:27.397410    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:27.414468    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:27.414540    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:27.425299    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:27.425373    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:27.436254    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:27.436323    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:27.447189    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:27.447257    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:27.460345    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:27.460409    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:27.470754    4677 logs.go:276] 0 containers: []
	W0725 11:11:27.470767    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:27.470826    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:27.491185    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:27.491201    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:27.491206    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:27.516967    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:27.516979    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:27.554303    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:27.554313    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:27.568724    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:27.568738    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:27.580452    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:27.580463    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:27.599049    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:27.599059    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:27.611100    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:27.611114    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:27.622725    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:27.622736    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:27.627000    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:27.627007    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:27.653051    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:27.653064    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:27.667654    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:27.667665    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:27.679659    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:27.679671    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:27.691378    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:27.691389    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:27.727733    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:27.727745    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:27.743765    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:27.743775    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:27.755206    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:27.755221    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:27.767103    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:27.767114    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:30.285935    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:35.286158    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:35.286320    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:35.297735    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:35.297804    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:35.309041    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:35.309111    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:35.320031    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:35.320092    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:35.330621    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:35.330687    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:35.341883    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:35.341951    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:35.357629    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:35.357698    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:35.368514    4677 logs.go:276] 0 containers: []
	W0725 11:11:35.368524    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:35.368578    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:35.387403    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:35.387424    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:35.387430    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:35.424785    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:35.424800    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:35.436901    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:35.436911    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:35.452191    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:35.452201    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:35.476189    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:35.476197    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:35.494261    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:35.494272    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:35.506227    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:35.506237    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:35.517998    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:35.518009    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:35.529746    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:35.529757    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:35.565106    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:35.565120    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:35.579730    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:35.579739    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:35.605750    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:35.605760    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:35.621800    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:35.621813    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:35.639194    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:35.639205    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:35.656799    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:35.656808    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:35.661271    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:35.661277    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:35.675767    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:35.675784    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
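The "container status" command above prefers crictl when it is on PATH and falls back to plain `docker ps -a` otherwise; that is what the `` `which crictl || echo crictl` `` substitution followed by `|| sudo docker ps -a` encodes. The same fallback expressed in Go, as a sketch (containerStatus is a hypothetical name; sudo is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker,
    // mirroring the shell one-liner in the log.
    func containerStatus() (string, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command(path, "ps", "-a").CombinedOutput(); err == nil {
                return string(out), nil
            }
        }
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Print(out)
    }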
	I0725 11:11:38.189666    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:43.191881    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:43.191993    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:43.205190    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:43.205261    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:43.216672    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:43.216746    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:43.227475    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:43.227532    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:43.238609    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:43.238681    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:43.248919    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:43.248996    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:43.259778    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:43.259849    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:43.269851    4677 logs.go:276] 0 containers: []
	W0725 11:11:43.269865    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:43.269917    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:43.280140    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:43.280156    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:43.280161    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:43.291798    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:43.291808    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:43.315319    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:43.315326    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:43.329879    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:43.329889    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:43.342227    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:43.342242    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:43.355939    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:43.355949    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:43.367833    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:43.367849    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:43.405075    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:43.405084    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:43.422431    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:43.422442    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:43.443759    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:43.443770    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:43.454973    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:43.454983    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:43.465917    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:43.465928    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:43.500101    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:43.500117    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:43.519602    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:43.519612    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:43.530380    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:43.530389    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:43.541880    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:43.541889    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:43.546228    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:43.546237    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
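Every "Gathering logs for <component> [<id>]" pair above boils down to `docker logs --tail 400 <container-id>` executed over SSH inside the guest (journalctl and dmesg are used the same way for the host-side units). Run locally, that step reduces to a one-call sketch; gatherLogs is a hypothetical name:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs returns the last 400 log lines of a container.
    // CombinedOutput is used because container logs may go to stderr.
    func gatherLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := gatherLogs("af52e586dda6") // an etcd container ID from the log
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(out)
    }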
	I0725 11:11:46.073388    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:51.075832    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:51.076060    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:51.096167    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:51.096259    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:51.111439    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:51.111513    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:51.124092    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:51.124164    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:51.134205    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:51.134272    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:51.144729    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:51.144805    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:51.155474    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:51.155540    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:51.165320    4677 logs.go:276] 0 containers: []
	W0725 11:11:51.165331    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:51.165386    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:51.176352    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:51.176372    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:51.176377    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:51.214975    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:51.214987    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:11:51.219456    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:51.219461    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:51.233573    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:51.233583    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:51.245130    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:51.245140    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:51.262705    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:51.262714    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:51.273700    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:51.273713    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:51.284999    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:51.285011    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:51.321461    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:51.321472    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:51.335561    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:51.335571    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:51.353172    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:51.353183    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:51.368591    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:51.368602    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:51.394167    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:51.394177    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:51.407471    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:51.407482    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:51.436801    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:51.436810    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:51.448558    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:51.448569    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:51.463654    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:51.463667    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:53.985126    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:11:58.987314    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:11:58.987685    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:11:59.023621    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:11:59.023754    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:11:59.044781    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:11:59.044878    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:11:59.059561    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:11:59.059628    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:11:59.072031    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:11:59.072107    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:11:59.083402    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:11:59.083471    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:11:59.094010    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:11:59.094084    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:11:59.104743    4677 logs.go:276] 0 containers: []
	W0725 11:11:59.104756    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:11:59.104818    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:11:59.115494    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:11:59.115514    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:11:59.115519    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:11:59.127272    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:11:59.127283    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:11:59.138551    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:11:59.138563    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:11:59.150675    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:11:59.150687    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:11:59.162446    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:11:59.162459    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:11:59.180135    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:11:59.180145    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:11:59.191492    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:11:59.191504    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:11:59.202936    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:11:59.202947    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:11:59.227303    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:11:59.227313    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:11:59.241994    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:11:59.242004    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:11:59.259612    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:11:59.259624    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:11:59.277137    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:11:59.277146    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:11:59.288881    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:11:59.288894    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:11:59.326843    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:11:59.326851    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:11:59.361538    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:11:59.361549    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:11:59.379489    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:11:59.379503    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:11:59.404353    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:11:59.404365    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:01.911207    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:06.911711    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:06.911812    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:06.923044    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:06.923115    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:06.933650    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:06.933724    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:06.944641    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:06.944707    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:06.955273    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:06.955344    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:06.966279    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:06.966345    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:06.976541    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:06.976618    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:06.987127    4677 logs.go:276] 0 containers: []
	W0725 11:12:06.987142    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:06.987200    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:06.998052    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:06.998069    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:06.998074    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:07.011995    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:07.012009    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:07.023838    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:07.023848    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:07.059998    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:07.060011    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:07.074724    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:07.074736    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:07.088346    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:07.088358    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:07.101090    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:07.101104    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:07.127927    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:07.127949    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:07.143006    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:07.143021    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:07.158931    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:07.158945    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:07.171659    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:07.171671    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:07.188133    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:07.188149    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:07.200865    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:07.200879    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:07.241409    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:07.241428    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:07.246258    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:07.246267    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:07.272832    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:07.272845    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:07.292856    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:07.292873    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:09.807566    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:14.809237    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:14.809396    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:14.821095    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:14.821165    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:14.831650    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:14.831721    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:14.842375    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:14.842435    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:14.863086    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:14.863156    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:14.873938    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:14.874000    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:14.888390    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:14.888464    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:14.898566    4677 logs.go:276] 0 containers: []
	W0725 11:12:14.898578    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:14.898635    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:14.908781    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:14.908804    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:14.908809    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:14.945263    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:14.945272    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:14.970536    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:14.970552    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:14.986341    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:14.986356    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:14.998117    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:14.998128    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:15.033545    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:15.033561    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:15.047896    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:15.047910    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:15.059840    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:15.059851    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:15.085019    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:15.085028    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:15.096723    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:15.096736    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:15.108502    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:15.108511    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:15.122276    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:15.122289    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:15.133633    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:15.133648    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:15.157845    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:15.157853    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:15.161974    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:15.161980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:15.175902    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:15.175912    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:15.198421    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:15.198431    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:17.711912    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:22.714610    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:22.715001    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:22.756221    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:22.756343    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:22.778596    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:22.778679    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:22.792609    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:22.792682    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:22.804969    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:22.805040    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:22.815930    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:22.815996    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:22.826920    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:22.826990    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:22.837542    4677 logs.go:276] 0 containers: []
	W0725 11:12:22.837556    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:22.837607    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:22.848157    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:22.848174    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:22.848179    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:22.872736    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:22.872745    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:22.889021    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:22.889034    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:22.901845    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:22.901859    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:22.925607    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:22.925614    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:22.960773    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:22.960786    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:22.972614    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:22.972629    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:22.990376    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:22.990387    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:23.019055    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:23.019068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:23.031491    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:23.031504    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:23.043771    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:23.043781    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:23.048653    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:23.048662    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:23.062976    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:23.062986    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:23.078071    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:23.078084    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:23.091710    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:23.091723    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:23.105548    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:23.105559    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:23.119124    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:23.119134    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:25.658279    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:30.660372    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:30.660488    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:30.672271    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:30.672344    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:30.682937    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:30.683009    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:30.693257    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:30.693325    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:30.709179    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:30.709249    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:30.720478    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:30.720544    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:30.731506    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:30.731571    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:30.741823    4677 logs.go:276] 0 containers: []
	W0725 11:12:30.741839    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:30.741891    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:30.752790    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:30.752808    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:30.752814    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:30.778338    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:30.778363    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:30.794788    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:30.794802    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:30.812474    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:30.812488    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:30.834061    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:30.834075    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:30.848054    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:30.848067    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:30.854666    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:30.854681    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:30.874280    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:30.874301    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:30.888065    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:30.888077    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:30.938134    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:30.938147    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:30.952279    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:30.952292    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:30.965703    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:30.965713    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:30.989289    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:30.989300    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:31.029626    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:31.029649    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:31.045808    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:31.045827    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:31.059482    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:31.059494    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:31.083963    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:31.083984    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:33.599823    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:38.602086    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:38.602590    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:38.642218    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:38.642355    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:38.663278    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:38.663375    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:38.678581    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:38.678655    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:38.691107    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:38.691180    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:38.702243    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:38.702303    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:38.712692    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:38.712761    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:38.723006    4677 logs.go:276] 0 containers: []
	W0725 11:12:38.723016    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:38.723069    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:38.738964    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:38.739000    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:38.739008    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:38.760210    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:38.760224    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:38.801249    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:38.801264    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:38.815141    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:38.815151    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:38.833158    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:38.833169    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:38.871109    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:38.871127    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:38.876279    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:38.876290    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:38.888285    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:38.888298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:38.903692    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:38.903702    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:38.915743    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:38.915753    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:38.927523    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:38.927536    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:38.945910    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:38.945925    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:38.957118    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:38.957129    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:38.999284    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:38.999298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:39.011098    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:39.011109    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:39.027586    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:39.027600    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:39.050009    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:39.050018    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:41.564124    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:46.566169    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:46.566266    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:46.577375    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:46.577438    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:46.589484    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:46.589545    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:46.603003    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:46.603074    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:46.614174    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:46.614251    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:46.625013    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:46.625083    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:46.636081    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:46.636152    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:46.647121    4677 logs.go:276] 0 containers: []
	W0725 11:12:46.647138    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:46.647198    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:46.659662    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:46.659681    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:46.659686    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:46.698641    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:46.698652    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:46.737263    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:46.737276    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:46.763376    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:46.763388    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:46.781832    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:46.781844    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:46.797601    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:46.797614    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:46.822252    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:46.822262    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:46.844426    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:46.844439    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:46.849289    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:46.849301    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:46.870194    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:46.870213    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:46.885680    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:46.885691    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:46.898625    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:46.898638    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:46.911564    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:46.911577    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:46.926350    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:46.926363    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:46.940241    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:46.940254    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:46.957246    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:46.957261    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:46.970563    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:46.970575    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:49.489278    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:54.490899    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:54.491007    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:54.502368    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:54.502439    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:54.512616    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:54.512682    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:54.523273    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:54.523336    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:54.534091    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:54.534163    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:54.548943    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:54.549006    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:54.559413    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:54.559473    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:54.571874    4677 logs.go:276] 0 containers: []
	W0725 11:12:54.571886    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:54.571949    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:54.582642    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:54.582660    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:54.582678    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:54.594314    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:54.594326    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:54.632849    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:54.632860    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:54.636830    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:54.636839    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:54.674937    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:54.674948    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:54.703973    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:54.703997    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:54.717966    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:54.717980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:54.732762    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:54.732773    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:54.744012    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:54.744022    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:54.755771    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:54.755782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:54.768142    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:54.768153    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:54.779586    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:54.779597    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:54.794203    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:54.794211    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:54.809195    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:54.809204    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:54.827010    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:54.827022    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:54.851318    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:54.851325    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:54.862792    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:54.862803    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:57.378441    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:02.380467    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:02.380674    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:02.392759    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:13:02.392835    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:02.403829    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:13:02.403903    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:02.414767    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:13:02.414839    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:02.424905    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:13:02.424978    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:02.435508    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:13:02.435573    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:02.446240    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:13:02.446329    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:02.456259    4677 logs.go:276] 0 containers: []
	W0725 11:13:02.456271    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:02.456328    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:02.467002    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:13:02.467021    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:13:02.467026    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:13:02.481186    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:13:02.481196    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:13:02.495700    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:13:02.495710    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:13:02.511857    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:13:02.511867    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:13:02.523977    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:02.523988    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:02.545821    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:13:02.545828    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:02.557702    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:02.557714    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:02.594159    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:13:02.594167    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:13:02.611969    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:13:02.611980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:13:02.624543    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:13:02.624552    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:13:02.636145    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:13:02.636155    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:13:02.653195    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:13:02.653204    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:13:02.664996    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:02.665008    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:02.669290    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:13:02.669296    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:13:02.694406    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:13:02.694420    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:13:02.705502    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:13:02.705515    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:13:02.721891    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:02.721901    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:05.257723    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:10.259825    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:10.260031    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:10.279749    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:13:10.279853    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:10.294504    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:13:10.294576    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:10.311107    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:13:10.311182    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:10.321995    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:13:10.322064    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:10.332533    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:13:10.332608    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:10.343557    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:13:10.343624    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:10.354052    4677 logs.go:276] 0 containers: []
	W0725 11:13:10.354060    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:10.354112    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:10.365121    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:13:10.365139    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:13:10.365145    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:13:10.376856    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:13:10.376868    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:10.393805    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:10.393817    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:10.431744    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:13:10.431752    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:13:10.453052    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:13:10.453068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:13:10.470170    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:10.470181    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:10.474346    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:13:10.474352    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:13:10.489883    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:13:10.489896    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:13:10.502081    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:13:10.502090    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:13:10.514118    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:13:10.514130    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:13:10.525857    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:13:10.525869    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:13:10.544911    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:10.544923    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:10.590480    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:13:10.590490    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:13:10.604778    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:13:10.604791    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:13:10.623004    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:13:10.623017    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:13:10.647695    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:13:10.647708    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:13:10.659233    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:10.659245    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:13.183392    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:18.185508    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:18.185566    4677 kubeadm.go:597] duration metric: took 4m4.000448125s to restartPrimaryControlPlane
	W0725 11:13:18.185614    4677 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 11:13:18.185637    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 11:13:19.191312    4677 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005689666s)
	I0725 11:13:19.191388    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:13:19.196467    4677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:13:19.199432    4677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:13:19.202666    4677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:13:19.202672    4677 kubeadm.go:157] found existing configuration files:
	
	I0725 11:13:19.202700    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf
	I0725 11:13:19.205189    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:13:19.205213    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:13:19.207982    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf
	I0725 11:13:19.210981    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:13:19.211005    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:13:19.214355    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf
	I0725 11:13:19.216787    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:13:19.216812    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:13:19.219812    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf
	I0725 11:13:19.222876    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:13:19.222897    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 11:13:19.225700    4677 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 11:13:19.241985    4677 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0725 11:13:19.242078    4677 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 11:13:19.289504    4677 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 11:13:19.289562    4677 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 11:13:19.289622    4677 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 11:13:19.340546    4677 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 11:13:19.344685    4677 out.go:204]   - Generating certificates and keys ...
	I0725 11:13:19.344725    4677 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 11:13:19.344754    4677 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 11:13:19.344789    4677 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 11:13:19.344824    4677 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 11:13:19.344862    4677 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 11:13:19.344890    4677 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 11:13:19.344920    4677 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 11:13:19.344949    4677 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 11:13:19.344982    4677 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 11:13:19.345015    4677 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 11:13:19.345032    4677 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 11:13:19.345057    4677 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 11:13:19.397345    4677 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 11:13:19.604011    4677 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 11:13:19.707080    4677 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 11:13:19.846052    4677 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 11:13:19.883115    4677 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 11:13:19.883514    4677 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 11:13:19.883587    4677 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 11:13:19.969998    4677 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 11:13:19.974197    4677 out.go:204]   - Booting up control plane ...
	I0725 11:13:19.974245    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 11:13:19.974286    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 11:13:19.974317    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 11:13:19.974379    4677 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 11:13:19.974560    4677 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 11:13:24.979045    4677 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.005678 seconds
	I0725 11:13:24.979261    4677 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 11:13:24.989802    4677 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 11:13:25.499257    4677 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 11:13:25.499360    4677 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-159000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 11:13:26.007921    4677 kubeadm.go:310] [bootstrap-token] Using token: yq65yj.7wo91qypo083m5v5
	I0725 11:13:26.014304    4677 out.go:204]   - Configuring RBAC rules ...
	I0725 11:13:26.014407    4677 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 11:13:26.014530    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 11:13:26.021423    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 11:13:26.022983    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 11:13:26.024696    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 11:13:26.026489    4677 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 11:13:26.031234    4677 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 11:13:26.205807    4677 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 11:13:26.413676    4677 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 11:13:26.414128    4677 kubeadm.go:310] 
	I0725 11:13:26.414163    4677 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 11:13:26.414167    4677 kubeadm.go:310] 
	I0725 11:13:26.414218    4677 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 11:13:26.414224    4677 kubeadm.go:310] 
	I0725 11:13:26.414247    4677 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 11:13:26.414292    4677 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 11:13:26.414326    4677 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 11:13:26.414331    4677 kubeadm.go:310] 
	I0725 11:13:26.414370    4677 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 11:13:26.414376    4677 kubeadm.go:310] 
	I0725 11:13:26.414407    4677 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 11:13:26.414410    4677 kubeadm.go:310] 
	I0725 11:13:26.414446    4677 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 11:13:26.414511    4677 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 11:13:26.414560    4677 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 11:13:26.414563    4677 kubeadm.go:310] 
	I0725 11:13:26.414620    4677 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 11:13:26.414663    4677 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 11:13:26.414672    4677 kubeadm.go:310] 
	I0725 11:13:26.414718    4677 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yq65yj.7wo91qypo083m5v5 \
	I0725 11:13:26.414799    4677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 \
	I0725 11:13:26.414811    4677 kubeadm.go:310] 	--control-plane 
	I0725 11:13:26.414819    4677 kubeadm.go:310] 
	I0725 11:13:26.414880    4677 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 11:13:26.414884    4677 kubeadm.go:310] 
	I0725 11:13:26.414941    4677 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yq65yj.7wo91qypo083m5v5 \
	I0725 11:13:26.415016    4677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 
	I0725 11:13:26.415083    4677 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 11:13:26.415091    4677 cni.go:84] Creating CNI manager for ""
	I0725 11:13:26.415100    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:13:26.419167    4677 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 11:13:26.426119    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 11:13:26.429977    4677 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0725 11:13:26.434872    4677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 11:13:26.434925    4677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 11:13:26.434941    4677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-159000 minikube.k8s.io/updated_at=2024_07_25T11_13_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=running-upgrade-159000 minikube.k8s.io/primary=true
	I0725 11:13:26.439049    4677 ops.go:34] apiserver oom_adj: -16
	I0725 11:13:26.477982    4677 kubeadm.go:1113] duration metric: took 43.101791ms to wait for elevateKubeSystemPrivileges
	I0725 11:13:26.478112    4677 kubeadm.go:394] duration metric: took 4m12.306996458s to StartCluster
	I0725 11:13:26.478124    4677 settings.go:142] acquiring lock: {Name:mk9c0f6a74d3ffd78a971cee1d6827e5c0e0b5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:13:26.478210    4677 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:13:26.478589    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:13:26.478814    4677 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:13:26.478821    4677 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 11:13:26.478866    4677 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-159000"
	I0725 11:13:26.478879    4677 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-159000"
	W0725 11:13:26.478886    4677 addons.go:243] addon storage-provisioner should already be in state true
	I0725 11:13:26.478898    4677 host.go:66] Checking if "running-upgrade-159000" exists ...
	I0725 11:13:26.478925    4677 config.go:182] Loaded profile config "running-upgrade-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:13:26.478927    4677 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-159000"
	I0725 11:13:26.478938    4677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-159000"
	I0725 11:13:26.479920    4677 kapi.go:59] client config for running-upgrade-159000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a3fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:13:26.480050    4677 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-159000"
	W0725 11:13:26.480055    4677 addons.go:243] addon default-storageclass should already be in state true
	I0725 11:13:26.480062    4677 host.go:66] Checking if "running-upgrade-159000" exists ...
	I0725 11:13:26.483143    4677 out.go:177] * Verifying Kubernetes components...
	I0725 11:13:26.483455    4677 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 11:13:26.487199    4677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 11:13:26.487206    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:13:26.491034    4677 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:13:26.495077    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:13:26.499121    4677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:13:26.499129    4677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 11:13:26.499136    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:13:26.586218    4677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:13:26.591158    4677 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:13:26.591195    4677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:13:26.595460    4677 api_server.go:72] duration metric: took 116.63975ms to wait for apiserver process to appear ...
	I0725 11:13:26.595468    4677 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:13:26.595474    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:26.633520    4677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 11:13:26.646026    4677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:13:31.595784    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:31.595861    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:36.596242    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:36.596268    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:41.597121    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:41.597153    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:46.597278    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:46.597328    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:51.597629    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:51.597675    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:56.598132    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:56.598190    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0725 11:13:57.025436    4677 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0725 11:13:57.028864    4677 out.go:177] * Enabled addons: storage-provisioner
	I0725 11:13:57.036711    4677 addons.go:510] duration metric: took 30.558778292s for enable addons: enabled=[storage-provisioner]
	I0725 11:14:01.598791    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:01.598862    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:06.599175    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:06.599193    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:11.599927    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:11.599950    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:16.601070    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:16.601098    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:21.602458    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:21.602479    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:26.604190    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:26.604383    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:26.640579    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:26.640657    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:26.652517    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:26.652587    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:26.662964    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:26.663031    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:26.675988    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:26.676058    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:26.686703    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:26.686775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:26.699300    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:26.699370    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:26.710013    4677 logs.go:276] 0 containers: []
	W0725 11:14:26.710026    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:26.710080    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:26.720621    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:26.720640    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:26.720648    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:26.732554    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:26.732564    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:26.747053    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:26.747068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:26.764211    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:26.764227    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:26.776085    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:26.776097    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:26.780796    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:26.780801    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:26.816005    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:26.816017    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:26.831039    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:26.831050    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:26.844693    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:26.844703    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:26.868130    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:26.868138    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:26.879764    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:26.879775    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:26.912613    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:26.912624    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:26.924428    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:26.924440    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:29.440707    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:34.443091    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:34.443248    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:34.456441    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:34.456519    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:34.466759    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:34.466829    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:34.477463    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:34.477528    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:34.495097    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:34.495169    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:34.505423    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:34.505498    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:34.516200    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:34.516263    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:34.535113    4677 logs.go:276] 0 containers: []
	W0725 11:14:34.535123    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:34.535177    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:34.545660    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:34.545674    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:34.545679    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:34.567782    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:34.567797    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:34.579453    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:34.579467    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:34.604668    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:34.604680    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:34.618294    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:34.618305    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:34.653688    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:34.653699    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:34.689339    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:34.689352    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:34.705869    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:34.705884    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:34.717691    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:34.717706    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:34.729536    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:34.729549    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:34.741748    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:34.741765    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:34.746384    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:34.746394    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:34.761287    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:34.761298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:37.277056    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:42.279465    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:42.279765    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:42.307233    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:42.307341    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:42.323516    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:42.323608    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:42.336810    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:42.336879    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:42.348587    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:42.348656    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:42.359097    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:42.359173    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:42.370198    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:42.370267    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:42.380182    4677 logs.go:276] 0 containers: []
	W0725 11:14:42.380192    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:42.380243    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:42.390896    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:42.390912    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:42.390920    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:42.405611    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:42.405622    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:42.417658    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:42.417669    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:42.429266    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:42.429278    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:42.444746    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:42.444756    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:42.477620    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:42.477630    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:42.482114    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:42.482122    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:42.517849    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:42.517859    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:42.532122    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:42.532134    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:42.545252    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:42.545263    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:42.556826    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:42.556837    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:42.581263    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:42.581273    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:42.597917    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:42.597928    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:45.113782    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:50.112904    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:50.113243    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:50.139184    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:50.139299    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:50.160098    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:50.160182    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:50.173389    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:50.173458    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:50.187850    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:50.187921    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:50.198629    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:50.198702    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:50.209710    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:50.209775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:50.220432    4677 logs.go:276] 0 containers: []
	W0725 11:14:50.220442    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:50.220493    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:50.231075    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:50.231090    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:50.231095    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:50.242425    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:50.242436    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:50.255735    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:50.255748    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:50.270124    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:50.270134    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:50.287115    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:50.287125    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:50.312963    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:50.312978    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:50.347371    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:50.347383    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:50.362418    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:50.362429    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:50.376022    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:50.376034    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:50.387979    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:50.387993    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:50.399235    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:50.399248    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:50.410719    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:50.410729    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:50.447525    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:50.447539    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:52.952629    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:57.952730    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:57.953117    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:57.994358    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:57.994480    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:58.012215    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:58.012302    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:58.024783    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:58.024865    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:58.036444    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:58.036526    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:58.047150    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:58.047217    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:58.057369    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:58.057434    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:58.067680    4677 logs.go:276] 0 containers: []
	W0725 11:14:58.067691    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:58.067748    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:58.078181    4677 logs.go:276] 1 containers: [6a65d2a52fea]
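
After each failed probe the collector enumerates the containers for each control-plane component with the docker ps filters logged above, warning when a filter such as k8s_kindnet matches nothing. A minimal local sketch of that enumeration (assumed to run on a host where docker is on PATH; in the report the identical commands run inside the guest through ssh_runner.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	for _, c := range components {
    		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c,
    			"--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		// The report prints e.g. "1 containers: [618446cabe76]" and warns
    		// when the list is empty (kindnet in every round here).
    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
    	}
    }
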
	I0725 11:14:58.078197    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:58.078202    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:58.094367    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:58.094380    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:58.118724    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:58.118733    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:58.152983    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:58.152994    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:58.187967    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:58.187980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:58.202628    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:58.202643    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:58.214363    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:58.214377    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:58.226258    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:58.226269    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:58.240717    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:58.240728    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:58.252488    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:58.252498    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:58.257376    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:58.257383    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:58.271543    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:58.271556    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:58.293246    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:58.293260    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
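
A complete gathering round, like the one just above, then shells out once per source: docker logs --tail 400 for each container ID found, journalctl for the kubelet and docker/cri-docker units, kubectl describe nodes against the in-VM kubeconfig, dmesg, and a container-status listing whose `which crictl || echo crictl` guard tries crictl first and falls back to docker ps -a. A sketch of that command set, with the two container IDs copied from this round and the sequencing assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// One bash -c invocation per source, matching the ssh_runner.go lines
    	// in the report (which execute these inside the guest over SSH).
    	cmds := []string{
    		"docker logs --tail 400 618446cabe76", // kube-apiserver, this round
    		"docker logs --tail 400 b579aafdbaaa", // etcd, this round
    		"sudo journalctl -u kubelet -n 400",
    		"sudo journalctl -u docker -u cri-docker -n 400",
    		"sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    		// Prefer crictl if installed, otherwise fall back to docker ps -a.
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	for _, c := range cmds {
    		out, _ := exec.Command("/bin/bash", "-c", c).CombinedOutput()
    		fmt.Printf("--- %s ---\n%s\n", c, out)
    	}
    }
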
	I0725 11:15:00.812960    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:05.813871    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:05.814070    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:05.833579    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:05.833648    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:05.845839    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:05.845908    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:05.856443    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:05.856507    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:05.867223    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:05.867288    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:05.877824    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:05.877885    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:05.888577    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:05.888644    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:05.899380    4677 logs.go:276] 0 containers: []
	W0725 11:15:05.899390    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:05.899438    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:05.909784    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:05.909798    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:05.909806    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:05.929577    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:05.929588    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:05.944840    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:05.944853    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:05.956999    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:05.957013    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:05.982997    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:05.983009    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:05.999736    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:05.999750    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:06.022856    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:06.022869    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:06.061606    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:06.061621    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:06.073819    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:06.073833    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:06.086622    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:06.086631    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:06.112124    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:06.112131    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:06.124295    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:06.124309    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:06.158717    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:06.158728    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:08.665026    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:13.666400    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:13.666580    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:13.680058    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:13.680136    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:13.691076    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:13.691149    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:13.702930    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:13.702993    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:13.713589    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:13.713655    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:13.724448    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:13.724513    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:13.735029    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:13.735093    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:13.746016    4677 logs.go:276] 0 containers: []
	W0725 11:15:13.746027    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:13.746082    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:13.756361    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:13.756375    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:13.756380    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:13.761435    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:13.761441    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:13.795823    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:13.795837    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:13.810006    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:13.810014    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:13.824962    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:13.824973    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:13.849561    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:13.849569    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:13.860591    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:13.860602    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:13.893931    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:13.893941    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:13.917364    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:13.917374    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:13.936038    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:13.936048    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:13.947740    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:13.947749    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:13.965766    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:13.965777    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:13.977533    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:13.977542    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:16.494909    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:21.495360    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:21.495731    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:21.531214    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:21.531326    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:21.548457    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:21.548537    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:21.562305    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:21.562384    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:21.573885    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:21.573949    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:21.584766    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:21.584832    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:21.600820    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:21.600891    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:21.615117    4677 logs.go:276] 0 containers: []
	W0725 11:15:21.615128    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:21.615178    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:21.625839    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:21.625854    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:21.625859    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:21.642976    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:21.642990    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:21.656128    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:21.656141    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:21.671404    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:21.671414    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:21.684779    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:21.684792    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:21.708543    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:21.708551    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:21.742330    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:21.742339    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:21.746688    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:21.746695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:21.758100    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:21.758111    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:21.769886    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:21.769896    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:21.787150    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:21.787161    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:21.798718    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:21.798734    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:21.880647    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:21.880660    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:24.396373    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:29.398218    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:29.398401    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:29.415802    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:29.415895    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:29.428877    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:29.428952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:29.439954    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:29.440026    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:29.456920    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:29.456986    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:29.467765    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:29.467843    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:29.478750    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:29.478818    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:29.493069    4677 logs.go:276] 0 containers: []
	W0725 11:15:29.493079    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:29.493130    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:29.503481    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:29.503497    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:29.503502    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:29.520850    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:29.520864    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:29.532514    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:29.532529    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:29.543950    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:29.543961    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:29.548464    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:29.548470    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:29.583956    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:29.583966    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:29.604065    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:29.604076    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:29.615921    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:29.615932    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:29.631316    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:29.631341    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:29.666684    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:29.666695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:29.680866    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:29.680878    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:29.692776    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:29.692787    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:29.704566    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:29.704576    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:32.230870    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:37.232998    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:37.233214    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:37.257649    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:37.257763    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:37.274493    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:37.274575    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:37.289547    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:37.289617    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:37.300733    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:37.300804    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:37.311168    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:37.311239    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:37.321751    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:37.321819    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:37.332697    4677 logs.go:276] 0 containers: []
	W0725 11:15:37.332708    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:37.332765    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:37.342940    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:37.342956    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:37.342962    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:37.356956    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:37.356966    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:37.373524    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:37.373537    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:37.385238    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:37.385249    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:37.396466    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:37.396479    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:37.411170    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:37.411181    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:37.422624    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:37.422633    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:37.454811    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:37.454818    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:37.459288    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:37.459296    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:37.499282    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:37.499296    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:37.510732    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:37.510743    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:37.530828    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:37.530841    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:37.542881    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:37.542891    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:40.069684    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:45.071723    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:45.071938    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:45.101448    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:45.101564    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:45.117514    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:45.117607    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:45.132178    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:15:45.132244    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:45.146488    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:45.146564    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:45.156902    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:45.156971    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:45.167651    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:45.167717    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:45.177529    4677 logs.go:276] 0 containers: []
	W0725 11:15:45.177542    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:45.177598    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:45.192456    4677 logs.go:276] 1 containers: [6a65d2a52fea]
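
Note the change in this round: at 11:15:45 the k8s_coredns filter returns four containers ([124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]) where every earlier round returned two. Because the listing uses docker ps -a, exited containers remain visible, so the two new IDs most likely belong to freshly created coredns containers appearing alongside the older pair; the apiserver itself keeps failing its healthz probe throughout.
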
	I0725 11:15:45.192475    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:45.192481    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:45.196969    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:15:45.196976    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:15:45.208129    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:45.208142    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:45.222282    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:45.222297    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:45.234302    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:45.234314    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:45.246111    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:45.246122    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:45.264317    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:45.264327    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:45.288766    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:45.288775    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:45.322573    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:45.322582    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:45.359139    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:15:45.359149    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:15:45.370257    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:45.370268    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:45.382300    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:45.382310    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:45.397337    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:45.397348    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:45.411387    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:45.411398    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:45.429944    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:45.429957    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:47.947810    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:52.950404    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:52.950553    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:52.971023    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:52.971121    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:52.985873    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:52.985937    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:52.997727    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:15:52.997801    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:53.008624    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:53.008692    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:53.019048    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:53.019112    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:53.029773    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:53.029836    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:53.040301    4677 logs.go:276] 0 containers: []
	W0725 11:15:53.040310    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:53.040362    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:53.050612    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:53.050630    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:15:53.050635    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:15:53.062150    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:53.062164    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:53.073738    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:53.073749    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:53.108521    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:53.108534    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:53.122685    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:53.122698    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:53.134500    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:53.134511    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:53.159338    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:53.159352    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:53.163755    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:15:53.163762    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:15:53.185090    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:53.185102    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:53.211082    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:53.211093    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:53.225850    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:53.225861    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:53.245300    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:53.245314    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:53.260952    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:53.260963    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:53.272675    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:53.272686    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:53.284284    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:53.284297    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:55.818854    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:00.821042    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:00.821215    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:00.840150    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:00.840242    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:00.853504    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:00.853567    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:00.865904    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:00.865974    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:00.882983    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:00.883060    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:00.895962    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:00.896032    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:00.907194    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:00.907260    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:00.917680    4677 logs.go:276] 0 containers: []
	W0725 11:16:00.917692    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:00.917747    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:00.928503    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:00.928521    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:00.928527    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:00.963772    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:00.963782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:00.977749    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:00.977761    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:00.991416    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:00.991427    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:01.003936    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:01.003946    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:01.015533    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:01.015548    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:01.027644    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:01.027657    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:01.042333    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:01.042345    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:01.067200    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:01.067208    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:01.101706    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:01.101717    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:01.120414    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:01.120425    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:01.125157    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:01.125166    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:01.136832    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:01.136842    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:01.149962    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:01.149972    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:01.161377    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:01.161389    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:03.675212    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:08.677459    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:08.677695    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:08.704315    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:08.704439    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:08.726909    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:08.726996    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:08.739687    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:08.739768    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:08.750733    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:08.750799    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:08.761003    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:08.761065    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:08.771163    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:08.771231    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:08.784459    4677 logs.go:276] 0 containers: []
	W0725 11:16:08.784469    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:08.784530    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:08.795397    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:08.795414    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:08.795419    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:08.806954    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:08.806965    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:08.831515    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:08.831524    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:08.843081    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:08.843094    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:08.876346    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:08.876354    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:08.889506    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:08.889517    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:08.901785    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:08.901799    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:08.906722    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:08.906728    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:08.927155    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:08.927166    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:08.938496    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:08.938506    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:08.950144    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:08.950155    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:08.967181    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:08.967191    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:09.002050    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:09.002063    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:09.016642    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:09.016654    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:09.030772    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:09.030785    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:11.543914    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:16.545551    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:16.545775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:16.563266    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:16.563352    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:16.576745    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:16.576819    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:16.588329    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:16.588394    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:16.599036    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:16.599102    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:16.609397    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:16.609464    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:16.620265    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:16.620332    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:16.630600    4677 logs.go:276] 0 containers: []
	W0725 11:16:16.630612    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:16.630669    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:16.640651    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:16.640668    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:16.640674    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:16.674933    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:16.674948    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:16.686325    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:16.686337    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:16.703807    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:16.703818    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:16.715354    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:16.715365    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:16.747625    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:16.747633    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:16.761775    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:16.761785    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:16.773713    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:16.773724    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:16.790360    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:16.790372    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:16.805459    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:16.805468    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:16.817367    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:16.817378    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:16.829968    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:16.829980    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:16.835050    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:16.835057    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:16.849742    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:16.849754    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:16.861179    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:16.861192    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:19.386721    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:24.389139    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:24.389288    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:24.402869    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:24.402943    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:24.413380    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:24.413449    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:24.423659    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:24.423724    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:24.433884    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:24.433952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:24.447523    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:24.447593    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:24.458083    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:24.458142    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:24.468381    4677 logs.go:276] 0 containers: []
	W0725 11:16:24.468393    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:24.468442    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:24.479269    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:24.479290    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:24.479295    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:24.493047    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:24.493056    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:24.505220    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:24.505230    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:24.523409    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:24.523421    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:24.557739    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:24.557748    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:24.570336    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:24.570347    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:24.582167    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:24.582179    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:24.597010    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:24.597021    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:24.618207    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:24.618218    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:24.630180    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:24.630193    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:24.635043    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:24.635050    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:24.655758    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:24.655769    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:24.667030    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:24.667040    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:24.690617    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:24.690627    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:24.701744    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:24.701754    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:27.240173    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:32.241650    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:32.241961    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:32.260082    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:32.260184    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:32.273558    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:32.273622    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:32.289020    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:32.289097    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:32.300286    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:32.300354    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:32.315631    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:32.315697    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:32.326243    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:32.326310    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:32.336583    4677 logs.go:276] 0 containers: []
	W0725 11:16:32.336595    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:32.336653    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:32.347172    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:32.347188    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:32.347194    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:32.382080    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:32.382094    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:32.406391    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:32.406403    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:32.411087    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:32.411096    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:32.424348    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:32.424359    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:32.443844    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:32.443854    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:32.458496    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:32.458505    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:32.469919    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:32.469934    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:32.481796    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:32.481808    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:32.516926    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:32.516935    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:32.528901    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:32.528914    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:32.540725    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:32.540739    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:32.556910    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:32.556919    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:32.575373    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:32.575382    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:32.592662    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:32.592675    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:35.106749    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:40.108892    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:40.109123    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:40.136621    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:40.136745    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:40.155145    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:40.155222    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:40.169201    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:40.169275    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:40.180503    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:40.180577    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:40.191388    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:40.191457    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:40.202363    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:40.202428    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:40.215256    4677 logs.go:276] 0 containers: []
	W0725 11:16:40.215266    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:40.215314    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:40.227415    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:40.227434    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:40.227439    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:40.246132    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:40.246142    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:40.258417    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:40.258429    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:40.271312    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:40.271322    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:40.284000    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:40.284013    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:40.319698    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:40.319708    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:40.331562    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:40.331572    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:40.356093    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:40.356104    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:40.367504    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:40.367515    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:40.371805    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:40.371814    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:40.391309    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:40.391319    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:40.404786    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:40.404796    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:40.419817    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:40.419826    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:40.431401    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:40.431410    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:40.442695    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:40.442708    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:42.979686    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:47.982280    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:47.982594    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:48.015534    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:48.015658    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:48.032830    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:48.032926    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:48.046713    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:48.046789    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:48.058859    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:48.058926    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:48.069736    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:48.069815    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:48.081214    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:48.081281    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:48.091909    4677 logs.go:276] 0 containers: []
	W0725 11:16:48.091921    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:48.091984    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:48.102469    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:48.102485    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:48.102491    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:48.135286    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:48.135294    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:48.171373    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:48.171385    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:48.183044    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:48.183059    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:48.195677    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:48.195688    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:48.211090    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:48.211101    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:48.222591    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:48.222604    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:48.234406    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:48.234418    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:48.246506    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:48.246519    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:48.258830    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:48.258844    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:48.270641    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:48.270652    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:48.295412    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:48.295424    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:48.299854    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:48.299863    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:48.314528    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:48.314538    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:48.328921    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:48.328933    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:50.849443    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:55.851576    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:55.851674    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:55.863307    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:55.863383    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:55.874283    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:55.874364    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:55.885808    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:55.885881    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:55.896821    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:55.896889    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:55.907552    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:55.907619    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:55.918450    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:55.918522    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:55.929274    4677 logs.go:276] 0 containers: []
	W0725 11:16:55.929285    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:55.929345    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:55.939992    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:55.940011    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:55.940016    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:55.965101    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:55.965108    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:55.976959    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:55.976972    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:55.989453    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:55.989464    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:56.007393    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:56.007405    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:56.028716    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:56.028726    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:56.033757    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:56.033765    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:56.048755    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:56.048775    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:56.061615    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:56.061625    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:56.096257    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:56.096266    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:56.109267    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:56.109279    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:56.121684    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:56.121695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:56.140533    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:56.140547    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:56.178010    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:56.178024    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:56.193055    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:56.193071    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:58.707424    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:03.709499    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:03.709676    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:03.722449    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:03.722519    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:03.733878    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:03.733946    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:03.744930    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:03.745002    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:03.755885    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:03.755952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:03.766463    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:03.766527    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:03.777197    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:03.777260    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:03.787142    4677 logs.go:276] 0 containers: []
	W0725 11:17:03.787153    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:03.787206    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:03.797555    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:03.797572    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:03.797577    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:03.830092    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:03.830103    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:03.841649    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:03.841661    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:03.852562    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:03.852576    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:03.864461    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:03.864475    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:03.876245    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:03.876257    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:03.891005    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:03.891018    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:03.914461    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:03.914469    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:03.925954    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:03.925964    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:03.930318    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:03.930325    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:03.944668    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:03.944677    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:03.958395    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:03.958406    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:03.993619    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:03.993630    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:04.006828    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:04.006838    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:04.018700    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:04.018712    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:06.538234    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:11.540329    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:11.540459    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:11.551786    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:11.551858    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:11.563357    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:11.563433    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:11.574237    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:11.574311    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:11.584714    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:11.584771    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:11.596330    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:11.596391    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:11.608063    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:11.608129    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:11.618236    4677 logs.go:276] 0 containers: []
	W0725 11:17:11.618248    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:11.618304    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:11.629133    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:11.629149    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:11.629155    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:11.633958    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:11.633965    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:11.653482    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:11.653496    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:11.671523    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:11.671532    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:11.696638    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:11.696650    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:11.730165    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:11.730177    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:11.741511    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:11.741521    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:11.752992    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:11.753005    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:11.767791    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:11.767804    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:11.780128    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:11.780138    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:11.793639    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:11.793650    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:11.830472    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:11.830484    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:11.845085    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:11.845096    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:11.857124    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:11.857139    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:11.868837    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:11.868848    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:14.383019    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:19.385057    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:19.385235    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:19.403083    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:19.403159    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:19.414685    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:19.414752    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:19.425505    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:19.425580    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:19.436698    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:19.436767    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:19.448008    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:19.448066    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:19.459265    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:19.459338    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:19.469255    4677 logs.go:276] 0 containers: []
	W0725 11:17:19.469267    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:19.469319    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:19.479607    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:19.479624    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:19.479629    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:19.491590    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:19.491602    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:19.514724    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:19.514732    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:19.529721    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:19.529731    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:19.547161    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:19.547172    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:19.558748    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:19.558760    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:19.576525    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:19.576538    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:19.588722    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:19.588733    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:19.593640    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:19.593647    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:19.632160    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:19.632171    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:19.647355    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:19.647366    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:19.661615    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:19.661625    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:19.673994    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:19.674004    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:19.707526    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:19.707534    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:19.724496    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:19.724505    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:22.238742    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:27.241195    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:27.245554    4677 out.go:177] 
	W0725 11:17:27.249684    4677 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0725 11:17:27.249692    4677 out.go:239] * 
	W0725 11:17:27.250334    4677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:17:27.261620    4677 out.go:177] 

** /stderr **
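The failure captured above is the apiserver /healthz probe timing out repeatedly (api_server.go:253/269): each attempt is an HTTPS GET against https://10.0.2.15:8443/healthz with a roughly five-second per-attempt timeout, retried until the overall "wait 6m0s for node" budget is exhausted. Below is a minimal Go sketch of that polling pattern; the address and timeouts are taken from the log above, and this is an illustration only, not minikube's actual implementation:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Guest apiserver address taken from the log above.
    	url := "https://10.0.2.15:8443/healthz"
    	client := &http.Client{
    		// Roughly matches the ~5s gap between attempts in the log.
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is not trusted by the probing host,
    			// so skip verification for this diagnostic probe only.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	// "wait 6m0s for node" is the overall budget reported on failure.
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Printf("healthz probe failed: %v\n", err)
    			time.Sleep(2 * time.Second)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Printf("apiserver healthy: %s\n", body)
    			return
    		}
    		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
    }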
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-159000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-25 11:17:27.354364 -0700 PDT m=+2973.317180959
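The post-mortem that follows is produced by test helpers that shell out to the freshly built minikube binary and inspect its exit status. A minimal, hypothetical Go sketch of that pattern, reusing the exact status command run below (this is not the actual helpers_test.go code):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command copied from the post-mortem run below; the binary path is
    	// relative to the test workspace.
    	cmd := exec.Command("out/minikube-darwin-arm64",
    		"status", "--format={{.Host}}",
    		"-p", "running-upgrade-159000",
    		"-n", "running-upgrade-159000")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("output: %s", out)
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// A non-zero exit from "minikube status" encodes cluster state;
    		// the harness notes "exit status 2 (may be ok)".
    		fmt.Printf("exit status: %d\n", exitErr.ExitCode())
    	}
    }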
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-159000 -n running-upgrade-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-159000 -n running-upgrade-159000: exit status 2 (15.581563791s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-159000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-964000          | force-systemd-flag-964000 | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-029000              | force-systemd-env-029000  | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-029000           | force-systemd-env-029000  | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT | 25 Jul 24 11:07 PDT |
	| start   | -p docker-flags-463000                | docker-flags-463000       | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-964000             | force-systemd-flag-964000 | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-964000          | force-systemd-flag-964000 | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT | 25 Jul 24 11:07 PDT |
	| start   | -p cert-expiration-876000             | cert-expiration-876000    | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-463000 ssh               | docker-flags-463000       | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-463000 ssh               | docker-flags-463000       | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-463000                | docker-flags-463000       | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT | 25 Jul 24 11:07 PDT |
	| start   | -p cert-options-810000                | cert-options-810000       | jenkins | v1.33.1 | 25 Jul 24 11:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-810000 ssh               | cert-options-810000       | jenkins | v1.33.1 | 25 Jul 24 11:08 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-810000 -- sudo        | cert-options-810000       | jenkins | v1.33.1 | 25 Jul 24 11:08 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-810000                | cert-options-810000       | jenkins | v1.33.1 | 25 Jul 24 11:08 PDT | 25 Jul 24 11:08 PDT |
	| start   | -p running-upgrade-159000             | minikube                  | jenkins | v1.26.0 | 25 Jul 24 11:08 PDT | 25 Jul 24 11:09 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-159000             | running-upgrade-159000    | jenkins | v1.33.1 | 25 Jul 24 11:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-876000             | cert-expiration-876000    | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-876000             | cert-expiration-876000    | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT | 25 Jul 24 11:11 PDT |
	| start   | -p kubernetes-upgrade-567000          | kubernetes-upgrade-567000 | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-567000          | kubernetes-upgrade-567000 | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT | 25 Jul 24 11:11 PDT |
	| start   | -p kubernetes-upgrade-567000          | kubernetes-upgrade-567000 | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-567000          | kubernetes-upgrade-567000 | jenkins | v1.33.1 | 25 Jul 24 11:11 PDT | 25 Jul 24 11:11 PDT |
	| start   | -p stopped-upgrade-820000             | minikube                  | jenkins | v1.26.0 | 25 Jul 24 11:11 PDT | 25 Jul 24 11:12 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-820000 stop           | minikube                  | jenkins | v1.26.0 | 25 Jul 24 11:12 PDT | 25 Jul 24 11:12 PDT |
	| start   | -p stopped-upgrade-820000             | stopped-upgrade-820000    | jenkins | v1.33.1 | 25 Jul 24 11:12 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 11:12:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 11:12:19.842183    4843 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:12:19.842351    4843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:12:19.842356    4843 out.go:304] Setting ErrFile to fd 2...
	I0725 11:12:19.842363    4843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:12:19.842545    4843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:12:19.843739    4843 out.go:298] Setting JSON to false
	I0725 11:12:19.863206    4843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4303,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:12:19.863277    4843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:12:19.868533    4843 out.go:177] * [stopped-upgrade-820000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:12:19.876525    4843 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:12:19.876578    4843 notify.go:220] Checking for updates...
	I0725 11:12:19.883470    4843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:12:19.886511    4843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:12:19.889491    4843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:12:19.892484    4843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:12:19.895484    4843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:12:19.897190    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:12:19.900393    4843 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 11:12:19.903490    4843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:12:19.907327    4843 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:12:19.914447    4843 start.go:297] selected driver: qemu2
	I0725 11:12:19.914454    4843 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:19.914507    4843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:12:19.917078    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:12:19.917097    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:12:19.917136    4843 start.go:340] cluster config:
	{Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:19.917207    4843 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:12:19.924438    4843 out.go:177] * Starting "stopped-upgrade-820000" primary control-plane node in "stopped-upgrade-820000" cluster
	I0725 11:12:19.928458    4843 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:12:19.928472    4843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0725 11:12:19.928477    4843 cache.go:56] Caching tarball of preloaded images
	I0725 11:12:19.928532    4843 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:12:19.928537    4843 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0725 11:12:19.928584    4843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/config.json ...
	I0725 11:12:19.928993    4843 start.go:360] acquireMachinesLock for stopped-upgrade-820000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:12:19.929020    4843 start.go:364] duration metric: took 21.083µs to acquireMachinesLock for "stopped-upgrade-820000"
	I0725 11:12:19.929029    4843 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:12:19.929034    4843 fix.go:54] fixHost starting: 
	I0725 11:12:19.929139    4843 fix.go:112] recreateIfNeeded on stopped-upgrade-820000: state=Stopped err=<nil>
	W0725 11:12:19.929148    4843 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:12:19.936483    4843 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-820000" ...
	I0725 11:12:22.714610    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:22.715001    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:22.756221    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:22.756343    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:22.778596    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:22.778679    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:22.792609    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:22.792682    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:22.804969    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:22.805040    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:22.815930    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:22.815996    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:22.826920    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:22.826990    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:22.837542    4677 logs.go:276] 0 containers: []
	W0725 11:12:22.837556    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:22.837607    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:22.848157    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:22.848174    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:22.848179    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:22.872736    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:22.872745    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:22.889021    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:22.889034    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:22.901845    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:22.901859    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:22.925607    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:22.925614    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:22.960773    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:22.960786    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:22.972614    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:22.972629    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:22.990376    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:22.990387    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:23.019055    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:23.019068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:23.031491    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:23.031504    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:23.043771    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:23.043781    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:23.048653    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:23.048662    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:23.062976    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:23.062986    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:23.078071    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:23.078084    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:23.091710    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:23.091723    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:23.105548    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:23.105559    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:23.119124    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:23.119134    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
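The cycle above is minikube's apiserver diagnostic loop: each probe of https://10.0.2.15:8443/healthz gives up after the 5s client timeout, and every failure triggers one sweep of docker ps -a --filter=name=k8s_<component> followed by docker logs --tail 400 for each container found, plus the kubelet and docker journals, dmesg, and kubectl describe nodes. A minimal way to reproduce the probe by hand, assuming it is issued from inside the guest (10.0.2.15 is the QEMU user-mode NAT address of the VM itself, so it is normally not reachable from the host):

    # hedged reproduction of the api_server.go health check; run inside the VM
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz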
	I0725 11:12:19.939410    4843 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:12:19.939475    4843 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-820000 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/disk.qcow2
	I0725 11:12:19.985166    4843 main.go:141] libmachine: STDOUT: 
	I0725 11:12:19.985191    4843 main.go:141] libmachine: STDERR: 
	I0725 11:12:19.985197    4843 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
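While pid 4677 keeps polling its own cluster, pid 4843 now blocks until the restarted VM answers on the forwarded SSH port. The same wait can be performed by hand with the key and port the surrounding lines reference:

    # probe the hostfwd'ed SSH port until the guest is up (key path and port taken from the log)
    ssh -o StrictHostKeyChecking=no \
        -i /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa \
        -p 50463 docker@127.0.0.1 true && echo "VM is up"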
	I0725 11:12:25.658279    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:30.660372    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:30.660488    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:30.672271    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:30.672344    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:30.682937    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:30.683009    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:30.693257    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:30.693325    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:30.709179    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:30.709249    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:30.720478    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:30.720544    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:30.731506    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:30.731571    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:30.741823    4677 logs.go:276] 0 containers: []
	W0725 11:12:30.741839    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:30.741891    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:30.752790    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:30.752808    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:30.752814    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:30.778338    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:30.778363    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:30.794788    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:30.794802    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:30.812474    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:30.812488    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:30.834061    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:30.834075    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:30.848054    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:30.848067    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:30.854666    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:30.854681    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:30.874280    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:30.874301    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:30.888065    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:30.888077    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:30.938134    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:30.938147    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:30.952279    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:30.952292    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:30.965703    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:30.965713    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:30.989289    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:30.989300    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:31.029626    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:31.029649    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:31.045808    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:31.045827    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:31.059482    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:31.059494    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:31.083963    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:31.083984    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:33.599823    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:38.602086    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:38.602590    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:38.642218    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:38.642355    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:38.663278    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:38.663375    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:38.678581    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:38.678655    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:38.691107    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:38.691180    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:38.702243    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:38.702303    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:38.712692    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:38.712761    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:38.723006    4677 logs.go:276] 0 containers: []
	W0725 11:12:38.723016    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:38.723069    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:38.738964    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:38.739000    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:38.739008    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:38.760210    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:38.760224    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:38.801249    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:38.801264    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:38.815141    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:38.815151    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:38.833158    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:38.833169    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:38.871109    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:38.871127    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:38.876279    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:38.876290    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:38.888285    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:38.888298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:38.903692    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:38.903702    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:38.915743    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:38.915753    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:38.927523    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:38.927536    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:38.945910    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:38.945925    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:38.957118    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:38.957129    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:38.999284    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:38.999298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:39.011098    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:39.011109    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:39.027586    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:39.027600    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:39.050009    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:39.050018    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:39.856243    4843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/config.json ...
	I0725 11:12:39.856631    4843 machine.go:94] provisionDockerMachine start ...
	I0725 11:12:39.856703    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:39.856950    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:39.856960    4843 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 11:12:39.931575    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 11:12:39.931597    4843 buildroot.go:166] provisioning hostname "stopped-upgrade-820000"
	I0725 11:12:39.931668    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:39.931845    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:39.931855    4843 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-820000 && echo "stopped-upgrade-820000" | sudo tee /etc/hostname
	I0725 11:12:40.006149    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-820000
	
	I0725 11:12:40.006206    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.006342    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.006353    4843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-820000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-820000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-820000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 11:12:40.071303    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 11:12:40.071313    4843 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1196/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1196/.minikube}
	I0725 11:12:40.071320    4843 buildroot.go:174] setting up certificates
	I0725 11:12:40.071325    4843 provision.go:84] configureAuth start
	I0725 11:12:40.071336    4843 provision.go:143] copyHostCerts
	I0725 11:12:40.071406    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem, removing ...
	I0725 11:12:40.071414    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem
	I0725 11:12:40.071510    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem (1078 bytes)
	I0725 11:12:40.071678    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem, removing ...
	I0725 11:12:40.071683    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem
	I0725 11:12:40.071724    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem (1123 bytes)
	I0725 11:12:40.071823    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem, removing ...
	I0725 11:12:40.071827    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem
	I0725 11:12:40.071867    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem (1675 bytes)
	I0725 11:12:40.071952    4843 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-820000 san=[127.0.0.1 localhost minikube stopped-upgrade-820000]
	I0725 11:12:40.140982    4843 provision.go:177] copyRemoteCerts
	I0725 11:12:40.141033    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 11:12:40.141042    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.174789    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 11:12:40.182053    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0725 11:12:40.189025    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 11:12:40.195649    4843 provision.go:87] duration metric: took 124.323292ms to configureAuth
	I0725 11:12:40.195661    4843 buildroot.go:189] setting minikube options for container-runtime
	I0725 11:12:40.195769    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:12:40.195807    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.195894    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.195898    4843 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 11:12:40.260623    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 11:12:40.260631    4843 buildroot.go:70] root file system type: tmpfs
	I0725 11:12:40.260686    4843 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 11:12:40.260739    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.260866    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.260903    4843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 11:12:40.332436    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 11:12:40.332494    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.332610    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.332620    4843 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 11:12:40.688212    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
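The diff-or-replace idiom above only installs the freshly rendered unit when it differs from what is on disk; here no unit existed yet, so the move ran and systemd enabled it via the multi-user.target symlink. Two quick checks that the install took effect (systemctl cat is the same command the log itself runs a moment later):

    sudo systemctl cat docker.service | head -n 5   # should echo back the [Unit] header just written
    systemctl is-enabled docker                     # expect "enabled"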
	
	I0725 11:12:40.688227    4843 machine.go:97] duration metric: took 831.612833ms to provisionDockerMachine
	I0725 11:12:40.688234    4843 start.go:293] postStartSetup for "stopped-upgrade-820000" (driver="qemu2")
	I0725 11:12:40.688241    4843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 11:12:40.688310    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 11:12:40.688321    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.724789    4843 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 11:12:40.725976    4843 info.go:137] Remote host: Buildroot 2021.02.12
	I0725 11:12:40.725982    4843 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/addons for local assets ...
	I0725 11:12:40.726051    4843 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/files for local assets ...
	I0725 11:12:40.726144    4843 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem -> 16942.pem in /etc/ssl/certs
	I0725 11:12:40.726235    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 11:12:40.728590    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:12:40.735258    4843 start.go:296] duration metric: took 47.020416ms for postStartSetup
	I0725 11:12:40.735272    4843 fix.go:56] duration metric: took 20.806855792s for fixHost
	I0725 11:12:40.735306    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.735410    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.735415    4843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0725 11:12:40.800100    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721931161.020804296
	
	I0725 11:12:40.800107    4843 fix.go:216] guest clock: 1721931161.020804296
	I0725 11:12:40.800112    4843 fix.go:229] Guest: 2024-07-25 11:12:41.020804296 -0700 PDT Remote: 2024-07-25 11:12:40.735274 -0700 PDT m=+20.925081542 (delta=285.530296ms)
	I0725 11:12:40.800125    4843 fix.go:200] guest clock delta is within tolerance: 285.530296ms
	I0725 11:12:40.800129    4843 start.go:83] releasing machines lock for "stopped-upgrade-820000", held for 20.871722334s
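fix.go compares the guest and host wall clocks and accepts the restart when the delta is within tolerance; here the guest ran 285.530296ms ahead. The probe logged as date +%!s(MISSING).%!N(MISSING) is minikube's own log formatter mangling what is presumably date +%s.%N, i.e. epoch seconds and nanoseconds. A sketch of the same measurement (the host side needs GNU date for %N; minikube computes its side in Go instead):

    ssh -p 50463 docker@127.0.0.1 date +%s.%N   # guest clock
    date +%s.%N                                 # host clock (GNU date only)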
	I0725 11:12:40.800190    4843 ssh_runner.go:195] Run: cat /version.json
	I0725 11:12:40.800203    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.800190    4843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 11:12:40.800242    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	W0725 11:12:40.800782    4843 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0725 11:12:40.800805    4843 retry.go:31] will retry after 251.104985ms: dial tcp [::1]:50463: connect: connection refused
	W0725 11:12:41.086975    4843 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0725 11:12:41.087056    4843 ssh_runner.go:195] Run: systemctl --version
	I0725 11:12:41.089049    4843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 11:12:41.090688    4843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 11:12:41.090721    4843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0725 11:12:41.093692    4843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0725 11:12:41.099405    4843 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
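The two find/sed passes above normalize any pre-existing bridge or podman CNI config to minikube's pod network: IPv6 dst/subnet entries are dropped and every IPv4 subnet is rewritten to 10.244.0.0/16 (plus gateway 10.244.0.1 for podman). What the subnet rewrite does to a single config line, run stand-alone on an illustrative podman default:

    printf '%s\n' '    "subnet": "10.88.0.0/16",' \
      | sed -E 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g'
    # ->     "subnet": "10.244.0.0/16",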
	I0725 11:12:41.099416    4843 start.go:495] detecting cgroup driver to use...
	I0725 11:12:41.099493    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:12:41.109062    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0725 11:12:41.113072    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 11:12:41.118077    4843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 11:12:41.118133    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 11:12:41.121721    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:12:41.124698    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 11:12:41.127519    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:12:41.130709    4843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 11:12:41.134207    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 11:12:41.137632    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 11:12:41.140627    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 11:12:41.143532    4843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 11:12:41.146676    4843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 11:12:41.150867    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:41.226461    4843 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 11:12:41.232742    4843 start.go:495] detecting cgroup driver to use...
	I0725 11:12:41.232798    4843 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 11:12:41.241342    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:12:41.245770    4843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 11:12:41.251523    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:12:41.256366    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:12:41.261172    4843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 11:12:41.321126    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:12:41.326511    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:12:41.332267    4843 ssh_runner.go:195] Run: which cri-dockerd
	I0725 11:12:41.333398    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 11:12:41.335871    4843 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 11:12:41.340594    4843 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 11:12:41.418379    4843 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 11:12:41.497638    4843 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 11:12:41.497701    4843 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 11:12:41.503121    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:41.570848    4843 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:12:42.733934    4843 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163102833s)
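Having configured containerd for cgroupfs (SystemdCgroup = false), docker.go pushes a 130-byte daemon.json from memory to pin Docker to the same cgroup driver, then bounces the daemon. After the restart, the effective driver can be confirmed with a format query (a verification step, not something the log runs):

    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs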
	I0725 11:12:42.734003    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 11:12:42.738805    4843 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 11:12:42.745017    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:12:42.749580    4843 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 11:12:42.833603    4843 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 11:12:42.910192    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:42.986321    4843 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 11:12:42.992525    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:12:42.997441    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:43.075957    4843 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 11:12:43.113887    4843 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 11:12:43.113971    4843 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0725 11:12:43.115986    4843 start.go:563] Will wait 60s for crictl version
	I0725 11:12:43.116036    4843 ssh_runner.go:195] Run: which crictl
	I0725 11:12:43.117484    4843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 11:12:43.131681    4843 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0725 11:12:43.131743    4843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:12:43.147165    4843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:12:41.564124    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:43.169864    4843 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0725 11:12:43.169992    4843 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0725 11:12:43.171188    4843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 11:12:43.174787    4843 kubeadm.go:883] updating cluster {Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0725 11:12:43.174832    4843 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:12:43.174879    4843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:12:43.184998    4843 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:12:43.185007    4843 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0725 11:12:43.185049    4843 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:12:43.188140    4843 ssh_runner.go:195] Run: which lz4
	I0725 11:12:43.189418    4843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0725 11:12:43.190658    4843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 11:12:43.190678    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0725 11:12:44.078017    4843 docker.go:649] duration metric: took 888.655959ms to copy over tarball
	I0725 11:12:44.078074    4843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
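The preload path avoids pulling images over the network: the roughly 359 MB tarball is scp'd into the guest and unpacked over /var with extended attributes preserved, so Docker's overlay2 store comes up already populated. To peek at such a tarball without extracting it (illustrative; assumes the lz4 CLI is installed):

    lz4 -dc preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | tar -tf - | head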
	I0725 11:12:46.566169    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:46.566266    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:46.577375    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:46.577438    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:46.589484    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:46.589545    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:46.603003    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:46.603074    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:46.614174    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:46.614251    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:46.625013    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:46.625083    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:46.636081    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:46.636152    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:46.647121    4677 logs.go:276] 0 containers: []
	W0725 11:12:46.647138    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:46.647198    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:46.659662    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:46.659681    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:46.659686    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:46.698641    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:46.698652    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:46.737263    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:46.737276    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:46.763376    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:46.763388    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:46.781832    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:46.781844    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:46.797601    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:46.797614    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:46.822252    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:46.822262    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:46.844426    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:46.844439    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:46.849289    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:46.849301    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:46.870194    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:46.870213    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:46.885680    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:46.885691    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:46.898625    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:46.898638    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:46.911564    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:46.911577    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:46.926350    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:46.926363    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:46.940241    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:46.940254    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:46.957246    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:46.957261    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:46.970563    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:46.970575    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:49.489278    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:45.236532    4843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.158478125s)
	I0725 11:12:45.236545    4843 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0725 11:12:45.251835    4843 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:12:45.255095    4843 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0725 11:12:45.260519    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:45.342527    4843 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:12:46.807772    4843 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.465269625s)
	I0725 11:12:46.807865    4843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:12:46.819533    4843 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:12:46.819543    4843 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0725 11:12:46.819548    4843 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 11:12:46.823647    4843 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:46.825754    4843 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:46.827702    4843 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:46.827711    4843 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:46.830017    4843 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:46.830198    4843 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:46.832565    4843 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:46.832617    4843 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:46.834474    4843 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:46.834595    4843 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:46.836000    4843 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:46.836039    4843 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:46.837498    4843 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:46.837524    4843 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0725 11:12:46.838366    4843 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:46.839744    4843 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0725 11:12:47.283601    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.286246    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.296015    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.299023    4843 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0725 11:12:47.299060    4843 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.299108    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.311874    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:47.322992    4843 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0725 11:12:47.323013    4843 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.323065    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.323207    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0725 11:12:47.323335    4843 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0725 11:12:47.323345    4843 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.323367    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.327444    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.331144    4843 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0725 11:12:47.331165    4843 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:47.331210    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0725 11:12:47.337134    4843 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0725 11:12:47.337287    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.343610    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0725 11:12:47.346622    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0725 11:12:47.346997    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0725 11:12:47.358934    4843 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0725 11:12:47.358957    4843 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.358959    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0725 11:12:47.359003    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.364266    4843 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0725 11:12:47.364290    4843 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.364343    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.378681    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0725 11:12:47.378677    4843 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0725 11:12:47.378741    4843 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0725 11:12:47.378781    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0725 11:12:47.378787    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:12:47.385282    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0725 11:12:47.385400    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:12:47.389762    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0725 11:12:47.389778    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0725 11:12:47.389789    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0725 11:12:47.389835    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0725 11:12:47.389842    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0725 11:12:47.389857    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0725 11:12:47.400914    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0725 11:12:47.400944    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0725 11:12:47.438416    4843 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0725 11:12:47.438430    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0725 11:12:47.541537    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0725 11:12:47.541555    4843 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:12:47.541561    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0725 11:12:47.625886    4843 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0725 11:12:47.626001    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.656363    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0725 11:12:47.659181    4843 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0725 11:12:47.659204    4843 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.659259    4843 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.682808    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 11:12:47.682934    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:12:47.695433    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 11:12:47.695459    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0725 11:12:47.742397    4843 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:12:47.742416    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0725 11:12:47.880765    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0725 11:12:47.880790    4843 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:12:47.880796    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0725 11:12:48.112279    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 11:12:48.112318    4843 cache_images.go:92] duration metric: took 1.292801417s to LoadCachedImages
	W0725 11:12:48.112360    4843 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
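The image-load path above follows a fixed pattern: check whether the tarball already exists on the guest (stat -c "%s %y"), scp it over if it does not, then pipe it into the daemon with docker load. A minimal Go sketch of that last step, assuming a locally readable tarball path (the path below is illustrative, not the CI path) and a running Docker daemon:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the "Loading image" step in the log: verify the
// tarball exists, then pipe it into `docker load`, equivalent to
// /bin/bash -c "sudo cat <tarball> | docker load".
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("existence check failed: %w", err)
	}
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", tarball))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("docker load failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative path only; the run above loads pause, coredns, etcd and
	// storage-provisioner this way from /var/lib/minikube/images.
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The warning recorded above is a different failure mode: the kube-apiserver tarball was never present in the host-side cache directory, so the transfer step had nothing to copy.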
	I0725 11:12:48.112365    4843 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0725 11:12:48.112421    4843 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-820000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 11:12:48.112481    4843 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 11:12:48.127716    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:12:48.127730    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:12:48.127734    4843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 11:12:48.127743    4843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-820000 NodeName:stopped-upgrade-820000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 11:12:48.127802    4843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-820000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 11:12:48.127856    4843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0725 11:12:48.131058    4843 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 11:12:48.131084    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 11:12:48.134194    4843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0725 11:12:48.139379    4843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 11:12:48.144486    4843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0725 11:12:48.149457    4843 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0725 11:12:48.150694    4843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 11:12:48.154784    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:48.235994    4843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:12:48.245810    4843 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000 for IP: 10.0.2.15
	I0725 11:12:48.245820    4843 certs.go:194] generating shared ca certs ...
	I0725 11:12:48.245828    4843 certs.go:226] acquiring lock for ca certs: {Name:mk89636080cfada095e98fa6c0bd32580553affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.246012    4843 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key
	I0725 11:12:48.246050    4843 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key
	I0725 11:12:48.246060    4843 certs.go:256] generating profile certs ...
	I0725 11:12:48.246131    4843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key
	I0725 11:12:48.246149    4843 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42
	I0725 11:12:48.246159    4843 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0725 11:12:48.337978    4843 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 ...
	I0725 11:12:48.337991    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42: {Name:mkebcf6c4eabab22499b8d04e2fb92fba722ab86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.338302    4843 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42 ...
	I0725 11:12:48.338307    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42: {Name:mk0ade813cce628ed63ee06b37d15229e2dc78bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.338440    4843 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt
	I0725 11:12:48.338745    4843 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key
	I0725 11:12:48.338901    4843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.key
	I0725 11:12:48.339054    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem (1338 bytes)
	W0725 11:12:48.339081    4843 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694_empty.pem, impossibly tiny 0 bytes
	I0725 11:12:48.339086    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 11:12:48.339106    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem (1078 bytes)
	I0725 11:12:48.339125    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem (1123 bytes)
	I0725 11:12:48.339147    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem (1675 bytes)
	I0725 11:12:48.339194    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:12:48.339553    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 11:12:48.346730    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 11:12:48.353803    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 11:12:48.361118    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 11:12:48.368106    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 11:12:48.374923    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 11:12:48.382314    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 11:12:48.389327    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 11:12:48.396085    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 11:12:48.402758    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem --> /usr/share/ca-certificates/1694.pem (1338 bytes)
	I0725 11:12:48.410162    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /usr/share/ca-certificates/16942.pem (1708 bytes)
	I0725 11:12:48.417153    4843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 11:12:48.422302    4843 ssh_runner.go:195] Run: openssl version
	I0725 11:12:48.424333    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 11:12:48.427350    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.428882    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.428903    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.430632    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 11:12:48.434005    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694.pem && ln -fs /usr/share/ca-certificates/1694.pem /etc/ssl/certs/1694.pem"
	I0725 11:12:48.436860    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.438202    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:36 /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.438221    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.440296    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694.pem /etc/ssl/certs/51391683.0"
	I0725 11:12:48.443298    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16942.pem && ln -fs /usr/share/ca-certificates/16942.pem /etc/ssl/certs/16942.pem"
	I0725 11:12:48.446552    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.447974    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:36 /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.447996    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.449676    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16942.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 11:12:48.452497    4843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 11:12:48.453966    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 11:12:48.456094    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 11:12:48.457836    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 11:12:48.460321    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 11:12:48.461962    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 11:12:48.463632    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
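Each openssl x509 -checkend 86400 call above asks whether the given certificate expires within the next 24 hours (86400 seconds). The same check in Go, as a self-contained sketch (the path is one of the certs from the log; running it requires read access to the file):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM-encoded certificate at path expires
// within the given window, mirroring `openssl x509 -checkend 86400`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls inside the window (or is already past).
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}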
	I0725 11:12:48.465509    4843 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:48.465579    4843 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:12:48.477919    4843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 11:12:48.481073    4843 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 11:12:48.481082    4843 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 11:12:48.481107    4843 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 11:12:48.483809    4843 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:12:48.484088    4843 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-820000" does not appear in /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:12:48.484187    4843 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1196/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-820000" cluster setting kubeconfig missing "stopped-upgrade-820000" context setting]
	I0725 11:12:48.484404    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.484881    4843 kapi.go:59] client config for stopped-upgrade-820000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106493fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:12:48.485301    4843 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 11:12:48.487820    4843 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-820000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0725 11:12:48.487825    4843 kubeadm.go:1160] stopping kube-system containers ...
	I0725 11:12:48.487861    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:12:48.498279    4843 docker.go:483] Stopping containers: [42523f7ee731 84ce05051b4f 255915f3e59c 10b2277d1125 7b567558ab7f 9c1204c98245 34a564d49a8e d27309cceaaf]
	I0725 11:12:48.498336    4843 ssh_runner.go:195] Run: docker stop 42523f7ee731 84ce05051b4f 255915f3e59c 10b2277d1125 7b567558ab7f 9c1204c98245 34a564d49a8e d27309cceaaf
	I0725 11:12:48.508833    4843 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0725 11:12:48.514448    4843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:12:48.517839    4843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:12:48.517844    4843 kubeadm.go:157] found existing configuration files:
	
	I0725 11:12:48.517868    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0725 11:12:48.520366    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:12:48.520389    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:12:48.523006    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0725 11:12:48.526238    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:12:48.526261    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:12:48.529048    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0725 11:12:48.531579    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:12:48.531598    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:12:48.534620    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0725 11:12:48.537667    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:12:48.537691    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 11:12:48.540201    4843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:12:48.542952    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:48.564793    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.176445    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.298042    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.323775    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.342474    4843 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:12:49.342554    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:54.490899    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:54.491007    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:12:54.502368    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:12:54.502439    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:12:54.512616    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:12:54.512682    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:12:54.523273    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:12:54.523336    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:12:54.534091    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:12:54.534163    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:12:49.843758    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:50.343548    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:50.347820    4843 api_server.go:72] duration metric: took 1.005377125s to wait for apiserver process to appear ...
	I0725 11:12:50.347830    4843 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:12:50.347844    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
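The healthz wait that begins here is a plain poll loop: issue GET https://10.0.2.15:8443/healthz with a per-request timeout, retry until a 200 arrives or the overall budget is spent. A rough Go equivalent, with the retry interval chosen arbitrarily for the sketch (minikube's own client pins the cluster CA, as the kapi client config above shows; InsecureSkipVerify is used below only to keep the sketch self-contained):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 OK or the overall deadline passes, roughly what the api_server.go
// loop in the log is doing.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, as in the "Client.Timeout exceeded" lines
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // arbitrary retry interval for the sketch
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above every request ends in "context deadline exceeded", so the loop never sees a healthy response.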
	I0725 11:12:54.548943    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:12:54.549006    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:12:54.559413    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:12:54.559473    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:12:54.571874    4677 logs.go:276] 0 containers: []
	W0725 11:12:54.571886    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:12:54.571949    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:12:54.582642    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:12:54.582660    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:12:54.582678    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:12:54.594314    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:12:54.594326    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:12:54.632849    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:12:54.632860    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:12:54.636830    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:12:54.636839    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:12:54.674937    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:12:54.674948    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:12:54.703973    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:12:54.703997    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:12:54.717966    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:12:54.717980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:12:54.732762    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:12:54.732773    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:12:54.744012    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:12:54.744022    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:12:54.755771    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:12:54.755782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:12:54.768142    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:12:54.768153    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:12:54.779586    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:12:54.779597    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:12:54.794203    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:12:54.794211    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:12:54.809195    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:12:54.809204    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:12:54.827010    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:12:54.827022    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:12:54.851318    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:12:54.851325    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:12:54.862792    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:12:54.862803    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:12:57.378441    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:55.348377    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:55.348406    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:02.380467    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:02.380674    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:02.392759    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:13:02.392835    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:02.403829    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:13:02.403903    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:02.414767    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:13:02.414839    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:02.424905    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:13:02.424978    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:02.435508    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:13:02.435573    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:02.446240    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:13:02.446329    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:02.456259    4677 logs.go:276] 0 containers: []
	W0725 11:13:02.456271    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:02.456328    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:02.467002    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:13:02.467021    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:13:02.467026    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:13:02.481186    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:13:02.481196    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:13:02.495700    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:13:02.495710    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:13:02.511857    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:13:02.511867    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:13:02.523977    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:02.523988    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:02.545821    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:13:02.545828    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:02.557702    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:02.557714    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:02.594159    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:13:02.594167    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:13:02.611969    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:13:02.611980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:13:02.624543    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:13:02.624552    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:13:02.636145    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:13:02.636155    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:13:02.653195    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:13:02.653204    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:13:02.664996    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:02.665008    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:02.669290    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:13:02.669296    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:13:02.694406    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:13:02.694420    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:13:02.705502    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:13:02.705515    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:13:02.721891    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:02.721901    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:00.348904    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:00.348987    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:05.257723    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:05.349448    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:05.349466    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:10.259825    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:10.260031    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:10.279749    4677 logs.go:276] 2 containers: [3cdf8608080a ede47b8eaf34]
	I0725 11:13:10.279853    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:10.294504    4677 logs.go:276] 2 containers: [1f2859256982 af52e586dda6]
	I0725 11:13:10.294576    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:10.311107    4677 logs.go:276] 1 containers: [ebb774ad6eb2]
	I0725 11:13:10.311182    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:10.321995    4677 logs.go:276] 2 containers: [49a212172aa6 7f5ae9df8f8f]
	I0725 11:13:10.322064    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:10.332533    4677 logs.go:276] 1 containers: [371c23c6e4b9]
	I0725 11:13:10.332608    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:10.343557    4677 logs.go:276] 2 containers: [660e6c467c36 0f72f05bb585]
	I0725 11:13:10.343624    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:10.354052    4677 logs.go:276] 0 containers: []
	W0725 11:13:10.354060    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:10.354112    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:10.365121    4677 logs.go:276] 2 containers: [3cc50156a9d7 baeb53ef6c11]
	I0725 11:13:10.365139    4677 logs.go:123] Gathering logs for storage-provisioner [baeb53ef6c11] ...
	I0725 11:13:10.365145    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 baeb53ef6c11"
	I0725 11:13:10.376856    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:13:10.376868    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:10.393805    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:10.393817    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:10.431744    4677 logs.go:123] Gathering logs for etcd [af52e586dda6] ...
	I0725 11:13:10.431752    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af52e586dda6"
	I0725 11:13:10.453052    4677 logs.go:123] Gathering logs for kube-controller-manager [660e6c467c36] ...
	I0725 11:13:10.453068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 660e6c467c36"
	I0725 11:13:10.470170    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:10.470181    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:10.474346    4677 logs.go:123] Gathering logs for kube-scheduler [7f5ae9df8f8f] ...
	I0725 11:13:10.474352    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f5ae9df8f8f"
	I0725 11:13:10.489883    4677 logs.go:123] Gathering logs for kube-scheduler [49a212172aa6] ...
	I0725 11:13:10.489896    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49a212172aa6"
	I0725 11:13:10.502081    4677 logs.go:123] Gathering logs for kube-proxy [371c23c6e4b9] ...
	I0725 11:13:10.502090    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371c23c6e4b9"
	I0725 11:13:10.514118    4677 logs.go:123] Gathering logs for kube-controller-manager [0f72f05bb585] ...
	I0725 11:13:10.514130    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0f72f05bb585"
	I0725 11:13:10.525857    4677 logs.go:123] Gathering logs for storage-provisioner [3cc50156a9d7] ...
	I0725 11:13:10.525869    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cc50156a9d7"
	I0725 11:13:10.544911    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:10.544923    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:10.590480    4677 logs.go:123] Gathering logs for kube-apiserver [3cdf8608080a] ...
	I0725 11:13:10.590490    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3cdf8608080a"
	I0725 11:13:10.604778    4677 logs.go:123] Gathering logs for etcd [1f2859256982] ...
	I0725 11:13:10.604791    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f2859256982"
	I0725 11:13:10.623004    4677 logs.go:123] Gathering logs for kube-apiserver [ede47b8eaf34] ...
	I0725 11:13:10.623017    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ede47b8eaf34"
	I0725 11:13:10.647695    4677 logs.go:123] Gathering logs for coredns [ebb774ad6eb2] ...
	I0725 11:13:10.647708    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebb774ad6eb2"
	I0725 11:13:10.659233    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:10.659245    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:13.183392    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:10.349647    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:10.349665    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:18.185508    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:18.185566    4677 kubeadm.go:597] duration metric: took 4m4.000448125s to restartPrimaryControlPlane
	W0725 11:13:18.185614    4677 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 11:13:18.185637    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 11:13:19.191312    4677 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.005689666s)
	I0725 11:13:19.191388    4677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:13:19.196467    4677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:13:19.199432    4677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:13:19.202666    4677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:13:19.202672    4677 kubeadm.go:157] found existing configuration files:
	
	I0725 11:13:19.202700    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf
	I0725 11:13:19.205189    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:13:19.205213    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:13:19.207982    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf
	I0725 11:13:19.210981    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:13:19.211005    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:13:19.214355    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf
	I0725 11:13:19.216787    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:13:19.216812    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:13:19.219812    4677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf
	I0725 11:13:19.222876    4677 kubeadm.go:163] "https://control-plane.minikube.internal:50303" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50303 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:13:19.222897    4677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
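
The grep/rm pairs above implement minikube's stale-config cleanup: any kubeconfig that cannot be shown to reference the expected control-plane endpoint is removed before kubeadm init runs. A bash sketch of the same loop (endpoint and file list taken directly from the log; here every grep exits with status 2 because the files are already gone, so each rm -f is a no-op):

    endpoint="https://control-plane.minikube.internal:50303"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits non-zero when the endpoint (or the file itself) is missing; either way the file is dropped.
        sudo grep -q "${endpoint}" "/etc/kubernetes/${f}" || sudo rm -f "/etc/kubernetes/${f}"
    done
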
	I0725 11:13:19.225700    4677 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 11:13:19.241985    4677 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0725 11:13:19.242078    4677 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 11:13:19.289504    4677 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 11:13:19.289562    4677 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 11:13:19.289622    4677 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 11:13:19.340546    4677 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 11:13:19.344685    4677 out.go:204]   - Generating certificates and keys ...
	I0725 11:13:19.344725    4677 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 11:13:19.344754    4677 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 11:13:19.344789    4677 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 11:13:19.344824    4677 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 11:13:19.344862    4677 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 11:13:19.344890    4677 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 11:13:19.344920    4677 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 11:13:19.344949    4677 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 11:13:19.344982    4677 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 11:13:19.345015    4677 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 11:13:19.345032    4677 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 11:13:19.345057    4677 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 11:13:19.397345    4677 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 11:13:15.349863    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:15.349910    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:19.604011    4677 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 11:13:19.707080    4677 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 11:13:19.846052    4677 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 11:13:19.883115    4677 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 11:13:19.883514    4677 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 11:13:19.883587    4677 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 11:13:19.969998    4677 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 11:13:19.974197    4677 out.go:204]   - Booting up control plane ...
	I0725 11:13:19.974245    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 11:13:19.974286    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 11:13:19.974317    4677 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 11:13:19.974379    4677 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 11:13:19.974560    4677 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 11:13:20.350271    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:20.350298    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:24.979045    4677 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.005678 seconds
	I0725 11:13:24.979261    4677 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 11:13:24.989802    4677 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 11:13:25.499257    4677 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 11:13:25.499360    4677 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-159000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 11:13:26.007921    4677 kubeadm.go:310] [bootstrap-token] Using token: yq65yj.7wo91qypo083m5v5
	I0725 11:13:26.014304    4677 out.go:204]   - Configuring RBAC rules ...
	I0725 11:13:26.014407    4677 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 11:13:26.014530    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 11:13:26.021423    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 11:13:26.022983    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 11:13:26.024696    4677 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 11:13:26.026489    4677 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 11:13:26.031234    4677 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 11:13:26.205807    4677 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 11:13:26.413676    4677 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 11:13:26.414128    4677 kubeadm.go:310] 
	I0725 11:13:26.414163    4677 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 11:13:26.414167    4677 kubeadm.go:310] 
	I0725 11:13:26.414218    4677 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 11:13:26.414224    4677 kubeadm.go:310] 
	I0725 11:13:26.414247    4677 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 11:13:26.414292    4677 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 11:13:26.414326    4677 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 11:13:26.414331    4677 kubeadm.go:310] 
	I0725 11:13:26.414370    4677 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 11:13:26.414376    4677 kubeadm.go:310] 
	I0725 11:13:26.414407    4677 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 11:13:26.414410    4677 kubeadm.go:310] 
	I0725 11:13:26.414446    4677 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 11:13:26.414511    4677 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 11:13:26.414560    4677 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 11:13:26.414563    4677 kubeadm.go:310] 
	I0725 11:13:26.414620    4677 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 11:13:26.414663    4677 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 11:13:26.414672    4677 kubeadm.go:310] 
	I0725 11:13:26.414718    4677 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yq65yj.7wo91qypo083m5v5 \
	I0725 11:13:26.414799    4677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 \
	I0725 11:13:26.414811    4677 kubeadm.go:310] 	--control-plane 
	I0725 11:13:26.414819    4677 kubeadm.go:310] 
	I0725 11:13:26.414880    4677 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 11:13:26.414884    4677 kubeadm.go:310] 
	I0725 11:13:26.414941    4677 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yq65yj.7wo91qypo083m5v5 \
	I0725 11:13:26.415016    4677 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 
	I0725 11:13:26.415083    4677 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
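
The single preflight warning kubeadm reports above is the disabled kubelet unit; it does not block init in this run, and can be cleared exactly as the message suggests:

    sudo systemctl enable kubelet.service
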
	I0725 11:13:26.415091    4677 cni.go:84] Creating CNI manager for ""
	I0725 11:13:26.415100    4677 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:13:26.419167    4677 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 11:13:26.426119    4677 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 11:13:26.429977    4677 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
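
The 496-byte payload scp'd to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The exact bytes are not reproduced in the log; a representative bridge conflist of the same shape (the subnet and individual plugin fields are illustrative assumptions, not the recorded payload) would look like:

    # Illustrative bridge CNI conflist; field values are assumptions, not the logged 496-byte file.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
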
	I0725 11:13:26.434872    4677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 11:13:26.434925    4677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 11:13:26.434941    4677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-159000 minikube.k8s.io/updated_at=2024_07_25T11_13_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=running-upgrade-159000 minikube.k8s.io/primary=true
	I0725 11:13:26.439049    4677 ops.go:34] apiserver oom_adj: -16
	I0725 11:13:26.477982    4677 kubeadm.go:1113] duration metric: took 43.101791ms to wait for elevateKubeSystemPrivileges
	I0725 11:13:26.478112    4677 kubeadm.go:394] duration metric: took 4m12.306996458s to StartCluster
	I0725 11:13:26.478124    4677 settings.go:142] acquiring lock: {Name:mk9c0f6a74d3ffd78a971cee1d6827e5c0e0b5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:13:26.478210    4677 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:13:26.478589    4677 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:13:26.478814    4677 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:13:26.478821    4677 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 11:13:26.478866    4677 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-159000"
	I0725 11:13:26.478879    4677 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-159000"
	W0725 11:13:26.478886    4677 addons.go:243] addon storage-provisioner should already be in state true
	I0725 11:13:26.478898    4677 host.go:66] Checking if "running-upgrade-159000" exists ...
	I0725 11:13:26.478925    4677 config.go:182] Loaded profile config "running-upgrade-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:13:26.478927    4677 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-159000"
	I0725 11:13:26.478938    4677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-159000"
	I0725 11:13:26.479920    4677 kapi.go:59] client config for running-upgrade-159000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/running-upgrade-159000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1059a3fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:13:26.480050    4677 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-159000"
	W0725 11:13:26.480055    4677 addons.go:243] addon default-storageclass should already be in state true
	I0725 11:13:26.480062    4677 host.go:66] Checking if "running-upgrade-159000" exists ...
	I0725 11:13:26.483143    4677 out.go:177] * Verifying Kubernetes components...
	I0725 11:13:26.483455    4677 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 11:13:26.487199    4677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 11:13:26.487206    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:13:26.491034    4677 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:13:26.495077    4677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:13:26.499121    4677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:13:26.499129    4677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 11:13:26.499136    4677 sshutil.go:53] new ssh client: &{IP:localhost Port:50271 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/running-upgrade-159000/id_rsa Username:docker}
	I0725 11:13:26.586218    4677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:13:26.591158    4677 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:13:26.591195    4677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:13:26.595460    4677 api_server.go:72] duration metric: took 116.63975ms to wait for apiserver process to appear ...
	I0725 11:13:26.595468    4677 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:13:26.595474    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:26.633520    4677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 11:13:26.646026    4677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
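
Addon installation is just an scp of the manifest into the guest followed by the bundled kubectl applying it against the node-local kubeconfig. The two applies above can be re-run by hand from inside the guest (paths and version taken from the log):

    # Re-apply the addon manifests exactly as minikube does above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
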
	I0725 11:13:25.350850    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:25.350901    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:31.595784    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:31.595861    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:30.351722    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:30.351769    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:36.596242    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:36.596268    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:35.352934    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:35.353019    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:41.597121    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:41.597153    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:40.354656    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:40.354679    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:46.597278    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:46.597328    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:45.354882    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:45.354964    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:51.597629    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:51.597675    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:50.356902    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:50.357021    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:50.369315    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:13:50.369381    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:50.379725    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:13:50.379796    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:50.390901    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:13:50.390983    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:50.403064    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:13:50.403153    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:50.414966    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:13:50.415033    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:50.426164    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:13:50.426230    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:50.436185    4843 logs.go:276] 0 containers: []
	W0725 11:13:50.436195    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:50.436244    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:50.446814    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:13:50.446830    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:13:50.446835    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:13:50.461544    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:13:50.461559    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:13:50.477287    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:13:50.477301    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:13:50.491903    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:13:50.491914    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:13:50.507674    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:13:50.507688    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:13:50.519121    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:50.519131    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:50.557094    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:50.557102    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:50.561048    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:13:50.561058    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:13:50.603448    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:13:50.603458    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:13:50.615718    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:13:50.615729    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:13:50.633043    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:13:50.633053    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:13:50.647974    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:13:50.647994    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:13:50.659695    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:13:50.659706    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:13:50.671688    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:50.671699    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:50.697283    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:13:50.697289    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:50.709018    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:50.709031    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:50.815033    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:13:50.815044    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:13:53.331050    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:56.598132    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:56.598190    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0725 11:13:57.025436    4677 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0725 11:13:57.028864    4677 out.go:177] * Enabled addons: storage-provisioner
	I0725 11:13:57.036711    4677 addons.go:510] duration metric: took 30.558778292s for enable addons: enabled=[storage-provisioner]
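
From here on, both minikube processes (pids 4677 and 4843) loop on the apiserver healthz probe roughly every five seconds until their overall wait budget runs out. The probe can be reproduced from inside the guest; a minimal sketch (the curl flags are an assumption for manual use, since minikube itself probes via a Go HTTP client configured with the cluster certificates):

    # Probe the apiserver health endpoint with a 5s budget, as api_server.go does above.
    # -k skips TLS verification; point --cacert at the cluster CA instead to verify.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo
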
	I0725 11:13:58.333299    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:58.333534    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:58.353333    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:13:58.353419    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:58.366752    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:13:58.366821    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:58.377967    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:13:58.378045    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:58.393746    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:13:58.393810    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:58.404327    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:13:58.404395    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:58.419123    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:13:58.419196    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:58.429037    4843 logs.go:276] 0 containers: []
	W0725 11:13:58.429049    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:58.429105    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:58.439370    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:13:58.439385    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:13:58.439391    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:13:58.452964    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:13:58.452976    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:13:58.464767    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:13:58.464778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:13:58.477665    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:13:58.477679    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:13:58.498470    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:13:58.498483    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:58.510767    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:13:58.510778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:13:58.529503    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:13:58.529516    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:13:58.541302    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:13:58.541314    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:13:58.557162    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:13:58.557175    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:13:58.574182    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:58.574192    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:58.599457    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:13:58.599466    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:13:58.637950    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:13:58.637965    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:13:58.649516    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:58.649527    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:58.687582    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:58.687593    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:58.691879    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:58.691886    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:58.728806    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:13:58.728822    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:13:58.743882    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:13:58.743896    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:01.598791    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:01.598862    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:01.260615    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:06.599175    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:06.599193    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:06.261175    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:06.261389    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:06.280909    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:06.280989    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:06.295095    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:06.295167    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:06.307275    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:06.307340    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:06.317810    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:06.317880    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:06.328155    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:06.328228    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:06.339293    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:06.339362    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:06.354394    4843 logs.go:276] 0 containers: []
	W0725 11:14:06.354408    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:06.354468    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:06.365453    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:06.365470    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:06.365475    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:06.404590    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:06.404600    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:06.422988    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:06.422998    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:06.447800    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:06.447809    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:06.461900    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:06.461910    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:06.475591    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:06.475601    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:06.487377    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:06.487387    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:06.501321    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:06.501332    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:06.517232    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:06.517242    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:06.532145    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:06.532154    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:06.546903    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:06.546914    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:06.558865    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:06.558875    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:06.579631    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:06.579644    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:06.591759    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:06.591769    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:06.596379    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:06.596386    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:06.634199    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:06.634210    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:06.672173    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:06.672184    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:09.185459    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:11.599927    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:11.599950    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:14.187639    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:14.187745    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:14.199682    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:14.199758    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:14.210773    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:14.210841    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:14.221005    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:14.221064    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:14.231272    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:14.231346    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:14.241464    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:14.241530    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:14.251964    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:14.252033    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:14.261858    4843 logs.go:276] 0 containers: []
	W0725 11:14:14.261868    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:14.261927    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:14.272529    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:14.272545    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:14.272550    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:14.311441    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:14.311454    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:14.315931    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:14.315939    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:14.352933    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:14.352947    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:14.364976    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:14.364987    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:14.383141    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:14.383154    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:14.396949    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:14.396962    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:14.409286    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:14.409297    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:14.426298    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:14.426308    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:14.437529    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:14.437540    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:14.448886    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:14.448897    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:14.486701    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:14.486711    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:14.501320    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:14.501329    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:14.516255    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:14.516269    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:14.527897    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:14.527912    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:14.554845    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:14.554870    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:14.571592    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:14.571605    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:16.601070    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:16.601098    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:17.088848    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:21.602458    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:21.602479    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:22.091480    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:22.091836    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:22.129143    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:22.129290    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:22.150266    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:22.150364    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:22.165523    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:22.165602    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:22.181830    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:22.181897    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:22.192573    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:22.192640    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:22.209384    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:22.209454    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:22.219940    4843 logs.go:276] 0 containers: []
	W0725 11:14:22.219952    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:22.220012    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:22.230786    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:22.230803    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:22.230808    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:22.242148    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:22.242159    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:22.280060    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:22.280068    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:22.294384    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:22.294394    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:22.306568    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:22.306580    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:22.321357    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:22.321368    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:22.332958    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:22.332973    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:22.345346    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:22.345357    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:22.384969    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:22.384980    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:22.398625    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:22.398635    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:22.410382    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:22.410398    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:22.427471    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:22.427482    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:22.451679    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:22.451690    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:22.455826    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:22.455834    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:22.497745    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:22.497757    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:22.512390    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:22.512403    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:22.524145    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:22.524159    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:26.604190    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:26.604383    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:26.640579    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:26.640657    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:26.652517    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:26.652587    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:26.662964    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:26.663031    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:26.675988    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:26.676058    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:26.686703    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:26.686775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:26.699300    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:26.699370    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:26.710013    4677 logs.go:276] 0 containers: []
	W0725 11:14:26.710026    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:26.710080    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:26.720621    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:26.720640    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:26.720648    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:26.732554    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:26.732564    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:26.747053    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:26.747068    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:26.764211    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:26.764227    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:26.776085    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:26.776097    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:26.780796    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:26.780801    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:26.816005    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:26.816017    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:26.831039    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:26.831050    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:26.844693    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:26.844703    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:26.868130    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:26.868138    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:26.879764    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:26.879775    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:26.912613    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:26.912624    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:26.924428    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:26.924440    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
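	[Editor's note] Interleaved with the gathering, both PIDs keep re-polling the apiserver health endpoint: each "Checking apiserver healthz" line is followed roughly five seconds later by a "stopped: ... Client.Timeout exceeded" failure, after which the discovery/gather cycle repeats. A hedged sketch of that poll — the ~5 s per-request client timeout and the skipped certificate verification are assumptions inferred from the log, not confirmed by it:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz issues one GET against the apiserver's /healthz endpoint.
	// The error string on timeout mirrors the "stopped:" lines in the log.
	func checkHealthz(client *http.Client, url string) error {
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned HTTP %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; matches the ~5 s check->stopped gap
			Transport: &http.Transport{
				// assumed: the apiserver serves a self-signed certificate
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://10.0.2.15:8443/healthz"
		for {
			fmt.Println("Checking apiserver healthz at", url, "...")
			if err := checkHealthz(client, url); err != nil {
				fmt.Println(err)
				time.Sleep(2 * time.Second) // the harness re-gathers logs here
				continue
			}
			fmt.Println("apiserver is healthy")
			return
		}
	}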
	I0725 11:14:29.440707    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:25.041845    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:34.443091    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:34.443248    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:34.456441    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:34.456519    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:34.466759    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:34.466829    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:34.477463    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:34.477528    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:34.495097    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:34.495169    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:34.505423    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:34.505498    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:34.516200    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:34.516263    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:30.044255    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:30.044669    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:30.084964    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:30.085102    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:30.106455    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:30.106548    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:30.121330    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:30.121395    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:30.133991    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:30.134066    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:30.144948    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:30.145010    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:30.156022    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:30.156088    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:30.166273    4843 logs.go:276] 0 containers: []
	W0725 11:14:30.166287    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:30.166347    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:30.176462    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:30.176482    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:30.176487    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:30.191523    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:30.191532    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:30.207652    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:30.207664    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:30.211838    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:30.211847    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:30.226743    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:30.226755    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:30.239877    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:30.239889    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:30.252000    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:30.252012    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:30.270119    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:30.270129    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:30.305208    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:30.305219    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:30.317153    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:30.317165    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:30.332566    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:30.332577    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:30.357996    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:30.358008    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:30.375487    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:30.375499    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:30.389125    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:30.389141    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:30.426244    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:30.426256    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:30.468220    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:30.468233    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:30.481304    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:30.481315    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
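	[Editor's note] Each "Gathering logs for X ..." line above is immediately followed by the shell command that produces it, run through `/bin/bash -c` on the guest: `docker logs --tail 400` for container logs, `journalctl` for the kubelet and Docker units, `dmesg` for kernel warnings, and the bundled kubectl for `describe nodes`. A sketch of that phase, with command strings copied verbatim from the log; the `gather` helper is ours, and it runs locally rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log source through `/bin/bash -c`, exactly as the
	// ssh_runner lines above show, and prints whatever comes back.
	func gather(name, command string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}

	func main() {
		// Command strings copied from the log lines above.
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		gather("etcd [84ce05051b4f]", "docker logs --tail 400 84ce05051b4f")
		gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	}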
	I0725 11:14:32.995125    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:34.535113    4677 logs.go:276] 0 containers: []
	W0725 11:14:34.535123    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:34.535177    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:34.545660    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:34.545674    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:34.545679    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:34.567782    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:34.567797    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:34.579453    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:34.579467    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:34.604668    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:34.604680    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:34.618294    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:34.618305    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:34.653688    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:34.653699    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:34.689339    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:34.689352    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:34.705869    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:34.705884    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:34.717691    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:34.717706    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:34.729536    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:34.729549    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:34.741748    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:34.741765    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:34.746384    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:34.746394    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:34.761287    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:34.761298    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:37.277056    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:37.997790    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:37.997999    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:38.023246    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:38.023344    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:38.038972    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:38.039050    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:38.051579    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:38.051651    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:38.063054    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:38.063125    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:38.073563    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:38.073621    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:38.084177    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:38.084237    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:38.094039    4843 logs.go:276] 0 containers: []
	W0725 11:14:38.094051    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:38.094109    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:38.104646    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:38.104664    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:38.104670    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:38.118328    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:38.118342    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:38.133963    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:38.133973    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:38.145896    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:38.145906    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:38.183124    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:38.183134    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:38.200025    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:38.200036    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:38.214160    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:38.214169    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:38.238023    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:38.238034    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:38.241943    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:38.241950    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:38.278843    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:38.278853    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:38.295324    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:38.295335    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:38.310245    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:38.310259    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:38.332236    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:38.332247    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:38.345572    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:38.345586    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:38.390815    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:38.390829    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:38.402722    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:38.402735    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:38.414839    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:38.414850    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:42.279465    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:42.279765    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:42.307233    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:42.307341    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:42.323516    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:42.323608    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:42.336810    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:42.336879    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:42.348587    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:42.348656    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:42.359097    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:42.359173    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:42.370198    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:42.370267    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:42.380182    4677 logs.go:276] 0 containers: []
	W0725 11:14:42.380192    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:42.380243    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:42.390896    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:42.390912    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:42.390920    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:42.405611    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:42.405622    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:42.417658    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:42.417669    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:42.429266    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:42.429278    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:42.444746    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:42.444756    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:42.477620    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:42.477630    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:42.482114    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:42.482122    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:42.517849    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:42.517859    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:42.532122    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:42.532134    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:42.545252    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:42.545263    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:42.556826    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:42.556837    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:42.581263    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:42.581273    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:42.597917    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:42.597928    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:40.931708    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:45.113782    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:45.933368    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:45.933534    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:45.949593    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:45.949673    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:45.962512    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:45.962587    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:45.973530    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:45.973596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:45.984430    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:45.984501    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:45.994829    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:45.994887    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:46.005405    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:46.005467    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:46.016162    4843 logs.go:276] 0 containers: []
	W0725 11:14:46.016173    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:46.016228    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:46.030119    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:46.030139    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:46.030145    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:46.044282    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:46.044292    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:46.058361    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:46.058373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:46.073429    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:46.073439    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:46.090105    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:46.090116    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:46.104118    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:46.104128    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:46.116222    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:46.116234    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:46.155335    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:46.155349    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:46.171363    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:46.171373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:46.183135    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:46.183145    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:46.195869    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:46.195880    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:46.210591    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:46.210602    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:46.246620    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:46.246630    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:46.250667    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:46.250677    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:46.262076    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:46.262092    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:46.286321    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:46.286329    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:46.297963    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:46.297975    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:48.838332    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:50.112904    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:50.113243    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:50.139184    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:50.139299    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:50.160098    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:50.160182    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:50.173389    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:50.173458    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:50.187850    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:50.187921    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:50.198629    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:50.198702    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:50.209710    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:50.209775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:50.220432    4677 logs.go:276] 0 containers: []
	W0725 11:14:50.220442    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:50.220493    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:50.231075    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:50.231090    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:50.231095    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:50.242425    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:50.242436    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:50.255735    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:50.255748    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:50.270124    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:50.270134    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:50.287115    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:50.287125    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:50.312963    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:50.312978    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:50.347371    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:50.347383    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:50.362418    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:50.362429    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:50.376022    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:50.376034    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:50.387979    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:50.387993    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:50.399235    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:50.399248    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:50.410719    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:50.410729    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:50.447525    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:50.447539    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:52.952629    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:53.837772    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:53.837924    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:53.848948    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:53.849032    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:53.860007    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:53.860078    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:53.870445    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:53.870508    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:53.882961    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:53.883036    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:53.893283    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:53.893352    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:53.903532    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:53.903596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:53.913594    4843 logs.go:276] 0 containers: []
	W0725 11:14:53.913606    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:53.913667    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:53.924047    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:53.924066    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:53.924071    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:53.939540    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:53.939551    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:53.976493    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:53.976503    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:53.990587    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:53.990601    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:54.002206    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:54.002219    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:54.006420    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:54.006428    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:54.018563    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:54.018574    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:54.030924    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:54.030935    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:54.054916    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:54.054923    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:54.091320    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:54.091328    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:54.105707    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:54.105716    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:54.118571    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:54.118582    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:54.136227    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:54.136241    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:54.155924    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:54.155937    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:54.167356    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:54.167371    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:54.179468    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:54.179480    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:54.217965    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:54.217978    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:57.952730    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:57.953117    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:57.994358    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:14:57.994480    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:58.012215    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:14:58.012302    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:58.024783    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:14:58.024865    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:58.036444    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:14:58.036526    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:58.047150    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:14:58.047217    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:58.057369    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:14:58.057434    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:58.067680    4677 logs.go:276] 0 containers: []
	W0725 11:14:58.067691    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:58.067748    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:58.078181    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:14:58.078197    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:14:58.078202    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:14:58.094367    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:58.094380    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:58.118724    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:58.118733    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:58.152983    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:58.152994    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:58.187967    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:14:58.187980    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:14:58.202628    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:14:58.202643    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:14:58.214363    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:14:58.214377    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:14:58.226258    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:14:58.226269    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:14:58.240717    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:14:58.240728    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:58.252488    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:58.252498    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:58.257376    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:14:58.257383    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:14:58.271543    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:14:58.271556    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:14:58.293246    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:14:58.293260    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:14:56.732933    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:00.812960    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:01.733379    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:01.733539    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:01.748274    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:01.748355    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:01.765937    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:01.766006    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:01.776463    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:01.776535    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:01.787108    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:01.787178    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:01.797280    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:01.797348    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:01.808230    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:01.808296    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:01.818386    4843 logs.go:276] 0 containers: []
	W0725 11:15:01.818397    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:01.818446    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:01.833774    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:01.833793    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:01.833799    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:01.848235    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:01.848246    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:01.859723    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:01.859735    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:01.874743    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:01.874753    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:01.912469    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:01.912479    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:01.916698    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:01.916707    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:01.954087    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:01.954099    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:01.969744    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:01.969755    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:01.983356    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:01.983368    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:02.005663    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:02.005675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:02.023433    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:02.023444    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:02.034748    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:02.034761    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:02.045787    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:02.045800    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:02.058016    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:02.058027    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:02.093408    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:02.093418    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:02.109868    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:02.109882    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:02.125034    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:02.125047    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:04.651242    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:05.813871    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:05.814070    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:05.833579    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:05.833648    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:05.845839    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:05.845908    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:05.856443    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:05.856507    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:05.867223    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:05.867288    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:05.877824    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:05.877885    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:05.888577    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:05.888644    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:05.899380    4677 logs.go:276] 0 containers: []
	W0725 11:15:05.899390    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:05.899438    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:05.909784    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:05.909798    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:05.909806    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:05.929577    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:05.929588    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:05.944840    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:05.944853    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:05.956999    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:05.957013    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:05.982997    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:05.983009    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:05.999736    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:05.999750    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:06.022856    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:06.022869    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:06.061606    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:06.061621    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:06.073819    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:06.073833    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:06.086622    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:06.086631    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:06.112124    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:06.112131    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:06.124295    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:06.124309    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:06.158717    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:06.158728    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:08.665026    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:09.652264    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:09.652490    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:09.674413    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:09.674500    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:09.686860    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:09.686934    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:09.698181    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:09.698249    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:09.712438    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:09.712516    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:09.722526    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:09.722600    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:09.733080    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:09.733150    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:09.743987    4843 logs.go:276] 0 containers: []
	W0725 11:15:09.743999    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:09.744055    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:09.754424    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:09.754451    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:09.754456    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:09.766669    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:09.766702    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:09.802725    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:09.802737    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:09.818362    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:09.818374    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:13.666400    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:13.666580    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:13.680058    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:13.680136    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:13.691076    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:13.691149    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:13.702930    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:13.702993    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:13.713589    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:13.713655    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:13.724448    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:13.724513    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:13.735029    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:13.735093    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:13.746016    4677 logs.go:276] 0 containers: []
	W0725 11:15:13.746027    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:13.746082    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:13.756361    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:13.756375    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:13.756380    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:13.761435    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:13.761441    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:13.795823    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:13.795837    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:13.810006    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:13.810014    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:13.824962    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:13.824973    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:13.849561    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:13.849569    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:13.860591    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:13.860602    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:13.893931    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:13.893941    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:13.917364    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:13.917374    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:13.936038    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:13.936048    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:13.947740    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:13.947749    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:13.965766    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:13.965777    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:13.977533    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:13.977542    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
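The block above is one full iteration of minikube's apiserver diagnostics loop: when the healthz probe at https://10.0.2.15:8443/healthz times out, the tool enumerates each control-plane component's containers over SSH and tails the last 400 lines from every source it finds. Condensed from the Run: lines above (with <container-id> as a placeholder for the IDs the enumeration returns), the per-iteration command sequence is roughly:

	# enumerate a component's containers (repeated for apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, storage-provisioner)
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	# tail the logs of each container found
	docker logs --tail 400 <container-id>
	# plus the host-level sources
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
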
	I0725 11:15:09.832811    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:09.832823    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:09.848649    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:09.848660    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:09.865561    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:09.865571    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:09.877253    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:09.877263    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:09.889372    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:09.889382    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:09.901380    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:09.901390    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:09.916093    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:09.916102    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:09.927427    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:09.927440    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:09.931632    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:09.931641    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:09.956078    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:09.956087    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:09.993640    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:09.993650    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:10.007831    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:10.007841    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:10.044740    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:10.044753    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:12.558129    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:16.494909    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:17.558584    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:17.558836    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:17.587378    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:17.587504    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:17.606000    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:17.606100    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:17.629425    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:17.629503    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:17.640398    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:17.640465    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:17.650538    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:17.650610    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:17.661433    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:17.661498    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:17.671546    4843 logs.go:276] 0 containers: []
	W0725 11:15:17.671556    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:17.671609    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:17.682135    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:17.682152    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:17.682156    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:17.696849    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:17.696858    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:17.708474    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:17.708486    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:17.720597    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:17.720609    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:17.757494    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:17.757508    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:17.770670    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:17.770682    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:17.782568    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:17.782579    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:17.820494    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:17.820501    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:17.857067    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:17.857077    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:17.868481    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:17.868492    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:17.881540    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:17.881551    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:17.902074    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:17.902090    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:17.924488    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:17.924499    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:17.942402    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:17.942413    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:17.956074    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:17.956085    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:17.971512    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:17.971526    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:17.976043    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:17.976052    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:21.495360    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:21.495731    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:21.531214    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:21.531326    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:21.548457    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:21.548537    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:21.562305    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:21.562384    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:21.573885    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:21.573949    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:21.584766    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:21.584832    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:21.600820    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:21.600891    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:21.615117    4677 logs.go:276] 0 containers: []
	W0725 11:15:21.615128    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:21.615178    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:21.625839    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:21.625854    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:21.625859    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:21.642976    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:21.642990    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:21.656128    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:21.656141    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:21.671404    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:21.671414    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:21.684779    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:21.684792    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:21.708543    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:21.708551    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:21.742330    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:21.742339    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:21.746688    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:21.746695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:21.758100    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:21.758111    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:21.769886    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:21.769896    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:21.787150    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:21.787161    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:21.798718    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:21.798734    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:21.880647    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:21.880660    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:24.396373    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:20.491105    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:29.398218    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:29.398401    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:29.415802    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:29.415895    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:29.428877    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:29.428952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:29.439954    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:29.440026    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:29.456920    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:29.456986    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:29.467765    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:29.467843    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:29.478750    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:29.478818    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:29.493069    4677 logs.go:276] 0 containers: []
	W0725 11:15:29.493079    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:29.493130    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:29.503481    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:29.503497    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:29.503502    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:25.493005    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:25.493363    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:25.539204    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:25.539347    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:25.558830    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:25.558925    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:25.573387    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:25.573458    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:25.587818    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:25.587895    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:25.598510    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:25.598572    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:25.609644    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:25.609712    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:25.621518    4843 logs.go:276] 0 containers: []
	W0725 11:15:25.621534    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:25.621589    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:25.632059    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:25.632075    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:25.632081    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:25.646706    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:25.646718    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:25.665810    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:25.665824    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:25.677589    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:25.677603    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:25.693026    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:25.693037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:25.735472    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:25.735483    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:25.747784    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:25.747798    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:25.759126    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:25.759140    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:25.797982    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:25.797995    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:25.814710    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:25.814722    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:25.828449    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:25.828460    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:25.846396    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:25.846409    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:25.858027    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:25.858037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:25.876453    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:25.876463    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:25.899371    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:25.899381    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:25.911114    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:25.911124    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:25.915894    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:25.915901    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:28.452431    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:29.520850    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:29.520864    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:29.532514    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:29.532529    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:29.543950    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:29.543961    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:29.548464    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:29.548470    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:29.583956    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:29.583966    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:29.604065    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:29.604076    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:29.615921    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:29.615932    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:29.631316    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:29.631341    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:29.666684    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:29.666695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:29.680866    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:29.680878    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:29.692776    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:29.692787    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:29.704566    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:29.704576    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:32.230870    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:33.454876    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:33.455060    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:33.479550    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:33.479655    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:33.495136    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:33.495216    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:33.507910    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:33.507979    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:33.523368    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:33.523431    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:33.534015    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:33.534088    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:33.545235    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:33.545307    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:33.554993    4843 logs.go:276] 0 containers: []
	W0725 11:15:33.555005    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:33.555061    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:33.565187    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:33.565208    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:33.565213    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:33.581878    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:33.581890    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:33.592802    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:33.592815    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:33.610367    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:33.610377    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:33.625543    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:33.625553    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:33.637007    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:33.637019    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:33.661002    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:33.661012    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:33.675318    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:33.675329    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:33.713677    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:33.713692    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:33.728139    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:33.728150    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:33.740155    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:33.740169    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:33.744281    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:33.744288    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:33.758261    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:33.758270    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:33.769640    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:33.769654    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:33.808487    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:33.808495    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:33.842884    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:33.842897    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:33.854789    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:33.854798    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
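Every cycle opens with the same health probe, and the roughly five-second gap between each "Checking apiserver healthz" line and the matching "stopped: ... Client.Timeout exceeded" line suggests a 5-second client timeout that keeps expiring before any response headers arrive. A minimal manual equivalent of the probe, assuming shell access to the guest (curl's -k skips certificate verification and --max-time bounds the wait the way the Go client's Client.Timeout does here), would be:

	curl -k --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers this endpoint with "ok"; in the cycles below the request never completes, so minikube keeps falling back to the same log-gathering pass.
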
	I0725 11:15:37.232998    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:37.233214    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:37.257649    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:37.257763    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:37.274493    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:37.274575    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:37.289547    4677 logs.go:276] 2 containers: [f09e78a809a6 1b057df70f63]
	I0725 11:15:37.289617    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:37.300733    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:37.300804    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:37.311168    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:37.311239    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:37.321751    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:37.321819    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:37.332697    4677 logs.go:276] 0 containers: []
	W0725 11:15:37.332708    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:37.332765    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:37.342940    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:37.342956    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:37.342962    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:37.356956    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:37.356966    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:37.373524    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:37.373537    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:37.385238    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:37.385249    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:37.396466    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:37.396479    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:37.411170    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:37.411181    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:37.422624    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:37.422633    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:37.454811    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:37.454818    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:37.459288    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:37.459296    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:37.499282    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:37.499296    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:37.510732    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:37.510743    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:37.530828    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:37.530841    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:37.542881    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:37.542891    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:36.367684    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:40.069684    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:41.369805    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:41.369965    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:41.384684    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:41.384766    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:41.400390    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:41.400448    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:41.410793    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:41.410853    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:41.421021    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:41.421086    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:41.431048    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:41.431105    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:41.442113    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:41.442170    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:41.452699    4843 logs.go:276] 0 containers: []
	W0725 11:15:41.452709    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:41.452755    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:41.464985    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:41.465000    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:41.465006    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:41.476676    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:41.476690    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:41.487415    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:41.487426    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:41.499767    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:41.499777    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:41.536370    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:41.536381    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:41.547667    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:41.547679    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:41.559850    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:41.559860    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:41.577985    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:41.577995    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:41.595028    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:41.595037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:41.609481    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:41.609491    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:41.647766    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:41.647777    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:41.662035    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:41.662045    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:41.673349    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:41.673360    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:41.677488    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:41.677494    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:41.713272    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:41.713283    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:41.736362    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:41.736373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:41.753975    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:41.753988    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:44.272581    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:45.071723    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:45.071938    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:45.101448    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:45.101564    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:45.117514    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:45.117607    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:45.132178    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:15:45.132244    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:45.146488    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:45.146564    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:45.156902    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:45.156971    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:45.167651    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:45.167717    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:45.177529    4677 logs.go:276] 0 containers: []
	W0725 11:15:45.177542    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:45.177598    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:45.192456    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:45.192475    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:45.192481    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:45.196969    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:15:45.196976    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:15:45.208129    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:45.208142    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:45.222282    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:45.222297    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:45.234302    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:45.234314    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:45.246111    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:45.246122    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:45.264317    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:45.264327    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:45.288766    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:45.288775    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:45.322573    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:45.322582    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:45.359139    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:15:45.359149    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:15:45.370257    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:45.370268    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:45.382300    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:45.382310    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:45.397337    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:45.397348    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:45.411387    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:45.411398    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:45.429944    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:45.429957    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:47.947810    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:49.273710    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:49.273960    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:49.303639    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:49.303748    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:49.322579    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:49.322669    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:49.338525    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:49.338603    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:49.350776    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:49.350854    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:49.362129    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:49.362198    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:49.373505    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:49.373566    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:49.384079    4843 logs.go:276] 0 containers: []
	W0725 11:15:49.384090    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:49.384147    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:49.394622    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:49.394644    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:49.394649    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:49.406223    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:49.406234    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:49.452952    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:49.452964    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:49.468466    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:49.468481    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:49.483795    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:49.483805    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:49.495229    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:49.495240    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:49.519543    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:49.519558    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:49.524332    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:49.524340    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:49.538997    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:49.539008    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:49.551270    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:49.551284    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:49.563076    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:49.563087    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:49.578408    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:49.578419    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:49.596911    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:49.596920    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:49.610800    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:49.610810    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:49.622853    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:49.622864    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:49.640917    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:49.640927    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:49.678482    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:49.678489    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:52.950404    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:52.950553    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:52.971023    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:15:52.971121    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:52.985873    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:15:52.985937    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:52.997727    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:15:52.997801    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:53.008624    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:15:53.008692    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:53.019048    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:15:53.019112    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:53.029773    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:15:53.029836    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:53.040301    4677 logs.go:276] 0 containers: []
	W0725 11:15:53.040310    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:53.040362    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:53.050612    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:15:53.050630    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:15:53.050635    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:15:53.062150    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:15:53.062164    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:15:53.073738    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:53.073749    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:53.108521    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:15:53.108534    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:15:53.122685    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:15:53.122698    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:15:53.134500    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:15:53.134511    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:15:53.159338    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:53.159352    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:53.163755    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:15:53.163762    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:15:53.185090    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:53.185102    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:53.211082    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:15:53.211093    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:15:53.225850    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:15:53.225861    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:15:53.245300    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:15:53.245314    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:15:53.260952    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:15:53.260963    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:15:53.272675    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:15:53.272686    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:53.284284    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:53.284297    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:52.215401    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:55.818854    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:57.217464    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:57.217661    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:57.236232    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:57.236323    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:57.252091    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:57.252166    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:57.264366    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:57.264436    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:57.274967    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:57.275035    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:57.286024    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:57.286086    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:57.296942    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:57.297007    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:57.306783    4843 logs.go:276] 0 containers: []
	W0725 11:15:57.306795    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:57.306848    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:57.317727    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
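Editor's note: each diagnostic cycle starts by enumerating the containers for every control-plane component, exactly as in the docker ps runs above. A sketch of that enumeration step, with the docker flags copied from the log; the containerIDs helper is a hypothetical name, and a local docker CLI is assumed:

	// Sketch of the container-enumeration step (flags copied from the log).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs runs `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`
	// and returns one ID per output line, mirroring the "N containers: [...]" lines.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}

An empty result produces the warning seen above for "kindnet", which is simply not deployed in this configuration.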
	I0725 11:15:57.317747    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:57.317754    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:57.332650    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:57.332662    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:57.344756    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:57.344766    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:57.356041    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:57.356051    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:57.390801    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:57.390812    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:57.427387    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:57.427398    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:57.441210    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:57.441222    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:57.452702    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:57.452712    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:57.464296    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:57.464306    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:57.468649    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:57.468655    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:57.487435    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:57.487444    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:57.506785    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:57.506795    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:57.523502    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:57.523512    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:57.561831    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:57.561841    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:57.576747    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:57.576757    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:57.587755    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:57.587764    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:57.610132    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:57.610140    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
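Editor's note: after enumeration, the cycle tails the last 400 lines of each container (docker logs --tail 400 <id>), pulls the kubelet and docker/cri-docker units via journalctl, runs kubectl describe nodes against the in-VM kubeconfig, grabs dmesg, and finishes with a container-status listing that falls back from crictl to docker ps (the `which crictl || echo crictl` ... `|| sudo docker ps -a` command above). A sketch of the per-container step only, assuming a local docker CLI; tailContainerLogs is a hypothetical name:

	// Sketch of the per-container log tail mirrored from the runs above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainerLogs mirrors `docker logs --tail 400 <id>`; stderr is
	// included because container logs are often written there.
	func tailContainerLogs(id string, lines int) (string, error) {
		out, err := exec.Command("docker", "logs",
			"--tail", fmt.Sprint(lines), id).CombinedOutput()
		return string(out), err
	}

	func main() {
		logs, err := tailContainerLogs("618446cabe76", 400) // apiserver ID from the log
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Print(logs)
	}

The remaining cycles in this section repeat these same steps verbatim, differing only in timestamps and in which of the two processes (PID 4677 or 4843) is probing.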
	I0725 11:16:00.821042    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:00.821215    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:00.840150    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:00.840242    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:00.853504    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:00.853567    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:00.865904    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:00.865974    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:00.882983    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:00.883060    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:00.895962    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:00.896032    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:00.907194    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:00.907260    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:00.917680    4677 logs.go:276] 0 containers: []
	W0725 11:16:00.917692    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:00.917747    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:00.928503    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:00.928521    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:00.928527    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:00.963772    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:00.963782    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:00.977749    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:00.977761    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:00.991416    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:00.991427    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:01.003936    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:01.003946    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:01.015533    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:01.015548    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:01.027644    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:01.027657    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:01.042333    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:01.042345    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:01.067200    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:01.067208    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:01.101706    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:01.101717    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:01.120414    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:01.120425    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:01.125157    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:01.125166    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:01.136832    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:01.136842    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:01.149962    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:01.149972    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:01.161377    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:01.161389    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:03.675212    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:00.123962    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:08.677459    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:08.677695    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:08.704315    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:08.704439    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:08.726909    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:08.726996    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:08.739687    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:08.739768    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:08.750733    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:08.750799    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:08.761003    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:08.761065    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:08.771163    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:08.771231    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:08.784459    4677 logs.go:276] 0 containers: []
	W0725 11:16:08.784469    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:08.784530    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:08.795397    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:08.795414    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:08.795419    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:08.806954    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:08.806965    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:08.831515    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:08.831524    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:08.843081    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:08.843094    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:08.876346    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:08.876354    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:08.889506    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:08.889517    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:08.901785    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:08.901799    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:08.906722    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:08.906728    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:08.927155    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:08.927166    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:08.938496    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:08.938506    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:08.950144    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:08.950155    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:08.967181    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:08.967191    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:09.002050    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:09.002063    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:09.016642    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:09.016654    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:09.030772    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:09.030785    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:05.126151    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:05.126270    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:05.141827    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:05.141905    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:05.154266    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:05.154341    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:05.165100    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:05.165170    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:05.180638    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:05.180710    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:05.191465    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:05.191541    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:05.202197    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:05.202267    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:05.212556    4843 logs.go:276] 0 containers: []
	W0725 11:16:05.212566    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:05.212627    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:05.223020    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:05.223039    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:05.223044    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:05.234555    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:05.234570    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:05.246089    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:05.246099    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:05.284640    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:05.284661    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:05.289556    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:05.289563    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:05.303816    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:05.303829    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:05.315922    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:05.315940    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:05.331852    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:05.331861    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:05.351404    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:05.351414    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:05.366770    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:05.366780    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:05.390394    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:05.390402    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:05.424263    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:05.424272    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:05.437852    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:05.437866    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:05.453521    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:05.453532    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:05.465865    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:05.465877    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:05.503612    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:05.503624    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:05.515499    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:05.515514    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:08.029561    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:11.543914    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:13.031781    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:13.031920    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:13.043764    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:13.043837    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:13.054208    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:13.054280    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:13.064904    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:13.064974    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:13.076298    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:13.076368    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:13.090702    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:13.090776    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:13.101754    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:13.101822    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:13.112532    4843 logs.go:276] 0 containers: []
	W0725 11:16:13.112542    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:13.112596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:13.123560    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:13.123577    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:13.123583    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:13.147108    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:13.147119    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:13.159309    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:13.159319    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:13.171398    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:13.171407    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:13.185030    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:13.185041    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:13.200351    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:13.200361    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:13.216218    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:13.216231    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:13.230221    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:13.230234    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:13.250275    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:13.250285    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:13.272035    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:13.272048    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:13.284691    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:13.284702    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:13.295603    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:13.295612    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:13.307504    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:13.307517    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:13.345586    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:13.345595    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:13.350280    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:13.350290    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:13.385278    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:13.385291    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:13.423138    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:13.423149    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:16.545551    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:16.545775    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:16.563266    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:16.563352    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:16.576745    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:16.576819    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:16.588329    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:16.588394    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:16.599036    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:16.599102    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:16.609397    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:16.609464    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:16.620265    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:16.620332    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:16.630600    4677 logs.go:276] 0 containers: []
	W0725 11:16:16.630612    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:16.630669    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:16.640651    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:16.640668    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:16.640674    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:16.674933    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:16.674948    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:16.686325    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:16.686337    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:16.703807    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:16.703818    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:16.715354    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:16.715365    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:16.747625    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:16.747633    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:16.761775    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:16.761785    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:16.773713    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:16.773724    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:16.790360    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:16.790372    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:16.805459    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:16.805468    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:16.817367    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:16.817378    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:16.829968    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:16.829980    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:16.835050    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:16.835057    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:16.849742    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:16.849754    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:16.861179    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:16.861192    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:19.386721    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:15.941557    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:24.389139    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:24.389288    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:24.402869    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:24.402943    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:24.413380    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:24.413449    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:24.423659    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:24.423724    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:24.433884    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:24.433952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:24.447523    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:24.447593    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:24.458083    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:24.458142    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:24.468381    4677 logs.go:276] 0 containers: []
	W0725 11:16:24.468393    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:24.468442    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:24.479269    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:24.479290    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:24.479295    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:24.493047    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:24.493056    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:24.505220    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:24.505230    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:20.944040    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:20.944192    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:20.959959    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:20.960031    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:20.972198    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:20.972276    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:20.983250    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:20.983325    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:20.994373    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:20.994440    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:21.011683    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:21.011752    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:21.024790    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:21.024866    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:21.035525    4843 logs.go:276] 0 containers: []
	W0725 11:16:21.035539    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:21.035596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:21.045908    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:21.045929    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:21.045935    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:21.060076    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:21.060089    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:21.071300    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:21.071310    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:21.082911    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:21.082924    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:21.098289    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:21.098300    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:21.109602    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:21.109616    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:21.148570    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:21.148582    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:21.163393    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:21.163402    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:21.174537    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:21.174547    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:21.213234    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:21.213250    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:21.217709    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:21.217716    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:21.229658    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:21.229681    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:21.252992    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:21.253000    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:21.288600    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:21.288609    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:21.303187    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:21.303199    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:21.318530    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:21.318541    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:21.332616    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:21.332627    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:23.851968    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:24.523409    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:24.523421    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:24.557739    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:24.557748    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:24.570336    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:24.570347    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:24.582167    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:24.582179    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:24.597010    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:24.597021    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:24.618207    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:24.618218    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:24.630180    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:24.630193    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:24.635043    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:24.635050    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:24.655758    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:24.655769    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:24.667030    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:24.667040    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:24.690617    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:24.690627    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:24.701744    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:24.701754    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:27.240173    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:28.854337    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:28.854697    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:28.885634    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:28.885755    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:28.904107    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:28.904190    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:28.917371    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:28.917443    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:28.930429    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:28.930497    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:28.940805    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:28.940867    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:28.951338    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:28.951407    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:28.961232    4843 logs.go:276] 0 containers: []
	W0725 11:16:28.961242    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:28.961292    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:28.971849    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:28.971867    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:28.971872    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:28.985545    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:28.985556    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:29.010055    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:29.010064    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:29.014397    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:29.014403    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:29.031309    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:29.031319    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:29.045575    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:29.045585    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:29.057277    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:29.057289    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:29.094656    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:29.094675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:29.132320    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:29.132334    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:29.149527    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:29.149540    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:29.161306    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:29.161331    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:29.172416    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:29.172428    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:29.210767    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:29.210778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:29.227465    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:29.227476    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:29.243339    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:29.243349    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:29.254597    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:29.254613    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:29.266240    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:29.266250    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:32.241650    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:32.241961    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:32.260082    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:32.260184    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:32.273558    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:32.273622    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:32.289020    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:32.289097    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:32.300286    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:32.300354    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:32.315631    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:32.315697    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:32.326243    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:32.326310    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:32.336583    4677 logs.go:276] 0 containers: []
	W0725 11:16:32.336595    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:32.336653    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:32.347172    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:32.347188    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:32.347194    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:32.382080    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:32.382094    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:32.406391    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:32.406403    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:32.411087    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:32.411096    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:32.424348    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:32.424359    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:32.443844    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:32.443854    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:32.458496    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:32.458505    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:32.469919    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:32.469934    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:32.481796    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:32.481808    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:32.516926    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:32.516935    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:32.528901    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:32.528914    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:32.540725    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:32.540739    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:32.556910    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:32.556919    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:32.575373    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:32.575382    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:32.592662    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:32.592675    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:31.779777    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:35.106749    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:36.782227    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:36.782467    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:36.802661    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:36.802766    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:36.816396    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:36.816475    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:36.832724    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:36.832794    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:36.843101    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:36.843177    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:36.855303    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:36.855368    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:36.868810    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:36.868884    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:36.879756    4843 logs.go:276] 0 containers: []
	W0725 11:16:36.879768    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:36.879817    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:36.890635    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:36.890652    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:36.890658    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:36.925641    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:36.925653    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:36.937231    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:36.937242    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:36.953035    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:36.953048    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:36.964937    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:36.964948    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:36.978379    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:36.978390    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:36.994181    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:36.994192    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:37.033324    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:37.033336    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:37.070990    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:37.071004    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:37.085685    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:37.085694    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:37.103339    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:37.103354    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:37.124682    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:37.124693    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:37.151055    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:37.151077    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:37.156015    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:37.156030    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:37.178549    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:37.178564    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:37.195331    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:37.195346    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:37.206841    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:37.206852    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:39.721771    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:40.108892    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
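The probe that keeps failing here can be reproduced by hand from a shell inside the guest (e.g. via minikube ssh). A minimal sketch: -k skips TLS verification for brevity, and the certificate path in the commented variant is an assumption based on minikube's in-guest defaults, not something shown in this log:

    # Same endpoint api_server.go polls above; a healthy apiserver answers "ok".
    curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo
    # Variant that verifies the server cert (path is an assumption):
    # curl --cacert /var/lib/minikube/certs/ca.crt https://10.0.2.15:8443/healthz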
	I0725 11:16:40.109123    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:40.136621    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:40.136745    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:40.155145    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:40.155222    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:40.169201    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:40.169275    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:40.180503    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:40.180577    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:40.191388    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:40.191457    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:40.202363    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:40.202428    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:40.215256    4677 logs.go:276] 0 containers: []
	W0725 11:16:40.215266    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:40.215314    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:40.227415    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:40.227434    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:40.227439    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:40.246132    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:40.246142    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:40.258417    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:40.258429    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:40.271312    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:40.271322    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:40.284000    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:40.284013    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:40.319698    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:40.319708    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:40.331562    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:40.331572    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:40.356093    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:40.356104    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:40.367504    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:40.367515    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:40.371805    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:40.371814    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:40.391309    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:40.391319    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:40.404786    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:40.404796    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:40.419817    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:40.419826    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:40.431401    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:40.431410    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:40.442695    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:40.442708    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
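Taken together, the lines above show minikube's diagnostic loop: list each control-plane component's containers with a name filter, tail the last 400 lines of each, then pull kubelet and runtime logs from journald. A minimal sketch of the same collection done by hand from inside the guest; the component list is taken from this log, so adjust it for other setups:

    # List each k8s component's containers and tail their logs, as minikube does above.
    for comp in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${comp}" --format '{{.ID}}'); do
        echo "=== ${comp} ${id} ==="
        docker logs --tail 400 "$id"
      done
    done
    sudo journalctl -u kubelet -n 400                # kubelet
    sudo journalctl -u docker -u cri-docker -n 400   # container runtime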
	I0725 11:16:42.979686    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:44.723917    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:44.724074    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:44.740370    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:44.740437    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:44.751384    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:44.751452    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:44.761699    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:44.761765    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:44.775183    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:44.775257    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:44.785788    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:44.785859    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:44.796703    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:44.796773    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:44.807612    4843 logs.go:276] 0 containers: []
	W0725 11:16:44.807626    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:44.807688    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:47.982280    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:47.982594    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:48.015534    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:48.015658    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:48.032830    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:48.032926    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:48.046713    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:48.046789    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:48.058859    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:48.058926    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:48.069736    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:48.069815    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:48.081214    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:48.081281    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:48.091909    4677 logs.go:276] 0 containers: []
	W0725 11:16:48.091921    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:48.091984    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:48.102469    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:48.102485    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:48.102491    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:48.135286    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:48.135294    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:48.171373    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:48.171385    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:48.183044    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:48.183059    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:48.195677    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:48.195688    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:48.211090    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:48.211101    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:48.222591    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:48.222604    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:48.234406    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:48.234418    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:48.246506    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:48.246519    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:48.258830    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:48.258844    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:48.270641    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:48.270652    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:48.295412    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:48.295424    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:48.299854    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:48.299863    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:48.314528    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:48.314538    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:48.328921    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:48.328933    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:44.823261    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:44.823278    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:44.823283    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:44.834650    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:44.834664    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:44.846194    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:44.846206    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:44.884618    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:44.884629    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:44.931753    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:44.931765    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:44.948457    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:44.948470    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:44.963753    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:44.963763    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:44.968292    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:44.968299    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:44.981854    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:44.981864    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:44.998152    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:44.998162    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:45.010068    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:45.010080    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:45.021932    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:45.021943    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:45.060142    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:45.060150    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:45.094917    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:45.094927    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:45.110436    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:45.110447    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:45.129284    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:45.129294    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:45.152670    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:45.152679    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:47.667542    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:52.669276    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:52.669337    4843 kubeadm.go:597] duration metric: took 4m4.208924833s to restartPrimaryControlPlane
	W0725 11:16:52.669395    4843 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 11:16:52.669420    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 11:16:53.696263    4843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026864959s)
	I0725 11:16:53.696334    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:16:53.701267    4843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:16:53.704215    4843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:16:53.706756    4843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:16:53.706763    4843 kubeadm.go:157] found existing configuration files:
	
	I0725 11:16:53.706786    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0725 11:16:53.709109    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:16:53.709129    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:16:53.711796    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0725 11:16:53.714187    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:16:53.714207    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:16:53.717093    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0725 11:16:53.720122    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:16:53.720140    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:16:53.722688    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0725 11:16:53.725991    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:16:53.726012    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
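The grep-then-rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed before kubeadm init. Condensed into a sketch (the port 50498 comes from this run):

    # Keep each kubeconfig only if it points at the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:50498"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done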
	I0725 11:16:53.729060    4843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 11:16:53.746444    4843 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0725 11:16:53.746473    4843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 11:16:53.798985    4843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 11:16:53.799046    4843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 11:16:53.799095    4843 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0725 11:16:53.847452    4843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 11:16:53.852617    4843 out.go:204]   - Generating certificates and keys ...
	I0725 11:16:53.852649    4843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 11:16:53.852678    4843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 11:16:53.852728    4843 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 11:16:53.852761    4843 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 11:16:53.852797    4843 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 11:16:53.852823    4843 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 11:16:53.852850    4843 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 11:16:53.852878    4843 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 11:16:53.852914    4843 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 11:16:53.852949    4843 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 11:16:53.852966    4843 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 11:16:53.852994    4843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 11:16:53.950168    4843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 11:16:54.094803    4843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 11:16:54.187130    4843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 11:16:54.238628    4843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 11:16:54.269654    4843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 11:16:54.270024    4843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 11:16:54.270056    4843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 11:16:54.351783    4843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 11:16:50.849443    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:54.356044    4843 out.go:204]   - Booting up control plane ...
	I0725 11:16:54.356093    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 11:16:54.356131    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 11:16:54.356170    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 11:16:54.356213    4843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 11:16:54.356320    4843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
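At this point kubeadm has written the four static Pod manifests and is waiting for the kubelet to start them. A quick way to confirm from inside the guest, reusing the same crictl-or-docker fallback seen in the log's "container status" step:

    sudo ls -la /etc/kubernetes/manifests
    # expected: etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a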
	I0725 11:16:55.851576    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:55.851674    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:55.863307    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:16:55.863383    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:55.874283    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:16:55.874364    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:55.885808    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:16:55.885881    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:55.896821    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:16:55.896889    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:55.907552    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:16:55.907619    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:55.918450    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:16:55.918522    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:55.929274    4677 logs.go:276] 0 containers: []
	W0725 11:16:55.929285    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:55.929345    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:55.939992    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:16:55.940011    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:55.940016    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:55.965101    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:16:55.965108    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:55.976959    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:16:55.976972    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:16:55.989453    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:16:55.989464    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:16:56.007393    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:16:56.007405    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:16:56.028716    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:56.028726    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:56.033757    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:16:56.033765    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:16:56.048755    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:16:56.048775    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:16:56.061615    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:56.061625    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:56.096257    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:16:56.096266    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:16:56.109267    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:16:56.109279    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:16:56.121684    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:16:56.121695    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:16:56.140533    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:56.140547    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:56.178010    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:16:56.178024    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:16:56.193055    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:16:56.193071    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:16:58.707424    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:58.857439    4843 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501506 seconds
	I0725 11:16:58.857540    4843 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 11:16:58.863323    4843 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 11:16:59.372241    4843 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 11:16:59.372354    4843 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-820000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 11:16:59.875850    4843 kubeadm.go:310] [bootstrap-token] Using token: m6opb0.4rgq96igybzj768v
	I0725 11:16:59.881421    4843 out.go:204]   - Configuring RBAC rules ...
	I0725 11:16:59.881487    4843 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 11:16:59.881538    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 11:16:59.884983    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 11:16:59.885951    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 11:16:59.886854    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 11:16:59.887881    4843 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 11:16:59.891026    4843 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 11:17:00.056608    4843 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 11:17:00.283759    4843 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 11:17:00.283771    4843 kubeadm.go:310] 
	I0725 11:17:00.283799    4843 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 11:17:00.283801    4843 kubeadm.go:310] 
	I0725 11:17:00.283885    4843 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 11:17:00.283890    4843 kubeadm.go:310] 
	I0725 11:17:00.283902    4843 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 11:17:00.283929    4843 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 11:17:00.283963    4843 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 11:17:00.283968    4843 kubeadm.go:310] 
	I0725 11:17:00.284008    4843 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 11:17:00.284018    4843 kubeadm.go:310] 
	I0725 11:17:00.284052    4843 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 11:17:00.284060    4843 kubeadm.go:310] 
	I0725 11:17:00.284098    4843 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 11:17:00.284160    4843 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 11:17:00.284200    4843 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 11:17:00.284203    4843 kubeadm.go:310] 
	I0725 11:17:00.284311    4843 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 11:17:00.284461    4843 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 11:17:00.284468    4843 kubeadm.go:310] 
	I0725 11:17:00.284593    4843 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m6opb0.4rgq96igybzj768v \
	I0725 11:17:00.284651    4843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 \
	I0725 11:17:00.284662    4843 kubeadm.go:310] 	--control-plane 
	I0725 11:17:00.284665    4843 kubeadm.go:310] 
	I0725 11:17:00.284703    4843 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 11:17:00.284706    4843 kubeadm.go:310] 
	I0725 11:17:00.284747    4843 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m6opb0.4rgq96igybzj768v \
	I0725 11:17:00.284799    4843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 
	I0725 11:17:00.285019    4843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 11:17:00.285244    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:17:00.285268    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:17:00.288319    4843 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 11:17:00.295389    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 11:17:00.298306    4843 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
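The 496-byte file scp'd above is the bridge CNI configuration minikube generates for the docker runtime on Kubernetes v1.24+. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge conflist of the same shape, where the subnet and plugin options are assumptions:

    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF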
	I0725 11:17:00.302914    4843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 11:17:00.302959    4843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 11:17:00.302960    4843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-820000 minikube.k8s.io/updated_at=2024_07_25T11_17_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=stopped-upgrade-820000 minikube.k8s.io/primary=true
	I0725 11:17:00.306109    4843 ops.go:34] apiserver oom_adj: -16
	I0725 11:17:00.353044    4843 kubeadm.go:1113] duration metric: took 50.123042ms to wait for elevateKubeSystemPrivileges
	I0725 11:17:00.353057    4843 kubeadm.go:394] duration metric: took 4m11.908477125s to StartCluster
	I0725 11:17:00.353067    4843 settings.go:142] acquiring lock: {Name:mk9c0f6a74d3ffd78a971cee1d6827e5c0e0b5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:17:00.353152    4843 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:17:00.353541    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:17:00.353746    4843 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:17:00.353760    4843 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 11:17:00.353791    4843 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-820000"
	I0725 11:17:00.353805    4843 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-820000"
	W0725 11:17:00.353809    4843 addons.go:243] addon storage-provisioner should already be in state true
	I0725 11:17:00.353813    4843 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-820000"
	I0725 11:17:00.353820    4843 host.go:66] Checking if "stopped-upgrade-820000" exists ...
	I0725 11:17:00.353824    4843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-820000"
	I0725 11:17:00.353833    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:17:00.354993    4843 kapi.go:59] client config for stopped-upgrade-820000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106493fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:17:00.355105    4843 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-820000"
	W0725 11:17:00.355109    4843 addons.go:243] addon default-storageclass should already be in state true
	I0725 11:17:00.355116    4843 host.go:66] Checking if "stopped-upgrade-820000" exists ...
	I0725 11:17:00.358161    4843 out.go:177] * Verifying Kubernetes components...
	I0725 11:17:00.358488    4843 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 11:17:00.362181    4843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 11:17:00.362190    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:17:00.368034    4843 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:17:03.709499    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:03.709676    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:03.722449    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:03.722519    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:03.733878    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:03.733946    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:03.744930    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:03.745002    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:03.755885    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:03.755952    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:03.766463    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:03.766527    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:03.777197    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:03.777260    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:03.787142    4677 logs.go:276] 0 containers: []
	W0725 11:17:03.787153    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:03.787206    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:03.797555    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:03.797572    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:03.797577    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:03.830092    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:03.830103    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:03.841649    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:03.841661    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:03.852562    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:03.852576    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:03.864461    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:03.864475    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:03.876245    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:03.876257    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:03.891005    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:03.891018    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:03.914461    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:03.914469    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:03.925954    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:03.925964    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:03.930318    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:03.930325    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:03.944668    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:03.944677    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:03.958395    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:03.958406    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:03.993619    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:03.993630    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:04.006828    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:04.006838    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:04.018700    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:04.018712    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:00.374062    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:17:00.377064    4843 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:17:00.377071    4843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 11:17:00.377079    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:17:00.467255    4843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:17:00.473954    4843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 11:17:00.475409    4843 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:17:00.475435    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:17:00.536444    4843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:17:00.807733    4843 api_server.go:72] duration metric: took 453.988625ms to wait for apiserver process to appear ...
	I0725 11:17:00.807747    4843 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:17:00.807756    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:06.538234    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:05.809519    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:05.809571    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:11.540329    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:11.540459    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:11.551786    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:11.551858    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:11.563357    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:11.563433    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:11.574237    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:11.574311    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:11.584714    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:11.584771    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:11.596330    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:11.596391    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:11.608063    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:11.608129    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:11.618236    4677 logs.go:276] 0 containers: []
	W0725 11:17:11.618248    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:11.618304    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:11.629133    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:11.629149    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:11.629155    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:11.633958    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:11.633965    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:11.653482    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:11.653496    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:11.671523    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:11.671532    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:11.696638    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:11.696650    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:11.730165    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:11.730177    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:11.741511    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:11.741521    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:11.752992    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:11.753005    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:11.767791    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:11.767804    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:11.780128    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:11.780138    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:11.793639    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:11.793650    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:11.830472    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:11.830484    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:11.845085    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:11.845096    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:11.857124    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:11.857139    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:11.868837    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:11.868848    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:14.383019    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:10.809692    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:10.809710    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:19.385057    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:19.385235    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:17:19.403083    4677 logs.go:276] 1 containers: [618446cabe76]
	I0725 11:17:19.403159    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:17:19.414685    4677 logs.go:276] 1 containers: [b579aafdbaaa]
	I0725 11:17:19.414752    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:17:19.425505    4677 logs.go:276] 4 containers: [124f9697a91c b0d5f89110b7 f09e78a809a6 1b057df70f63]
	I0725 11:17:19.425580    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:17:19.436698    4677 logs.go:276] 1 containers: [a7e36ee32739]
	I0725 11:17:19.436767    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:17:19.448008    4677 logs.go:276] 1 containers: [812bdfe73d04]
	I0725 11:17:19.448066    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:17:19.459265    4677 logs.go:276] 1 containers: [4fc39de1f40f]
	I0725 11:17:19.459338    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:17:19.469255    4677 logs.go:276] 0 containers: []
	W0725 11:17:19.469267    4677 logs.go:278] No container was found matching "kindnet"
	I0725 11:17:19.469319    4677 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:17:19.479607    4677 logs.go:276] 1 containers: [6a65d2a52fea]
	I0725 11:17:19.479624    4677 logs.go:123] Gathering logs for kube-proxy [812bdfe73d04] ...
	I0725 11:17:19.479629    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812bdfe73d04"
	I0725 11:17:19.491590    4677 logs.go:123] Gathering logs for Docker ...
	I0725 11:17:19.491602    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:17:19.514724    4677 logs.go:123] Gathering logs for kube-scheduler [a7e36ee32739] ...
	I0725 11:17:19.514732    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7e36ee32739"
	I0725 11:17:15.809872    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:15.809927    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:19.529721    4677 logs.go:123] Gathering logs for storage-provisioner [6a65d2a52fea] ...
	I0725 11:17:19.529731    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a65d2a52fea"
	I0725 11:17:19.547161    4677 logs.go:123] Gathering logs for coredns [1b057df70f63] ...
	I0725 11:17:19.547172    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1b057df70f63"
	I0725 11:17:19.558748    4677 logs.go:123] Gathering logs for kube-controller-manager [4fc39de1f40f] ...
	I0725 11:17:19.558760    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fc39de1f40f"
	I0725 11:17:19.576525    4677 logs.go:123] Gathering logs for container status ...
	I0725 11:17:19.576538    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:17:19.588722    4677 logs.go:123] Gathering logs for dmesg ...
	I0725 11:17:19.588733    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:17:19.593640    4677 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:17:19.593647    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:17:19.632160    4677 logs.go:123] Gathering logs for kube-apiserver [618446cabe76] ...
	I0725 11:17:19.632171    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 618446cabe76"
	I0725 11:17:19.647355    4677 logs.go:123] Gathering logs for etcd [b579aafdbaaa] ...
	I0725 11:17:19.647366    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b579aafdbaaa"
	I0725 11:17:19.661615    4677 logs.go:123] Gathering logs for coredns [f09e78a809a6] ...
	I0725 11:17:19.661625    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f09e78a809a6"
	I0725 11:17:19.673994    4677 logs.go:123] Gathering logs for kubelet ...
	I0725 11:17:19.674004    4677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:17:19.707526    4677 logs.go:123] Gathering logs for coredns [124f9697a91c] ...
	I0725 11:17:19.707534    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 124f9697a91c"
	I0725 11:17:19.724496    4677 logs.go:123] Gathering logs for coredns [b0d5f89110b7] ...
	I0725 11:17:19.724505    4677 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0d5f89110b7"
	I0725 11:17:22.238742    4677 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:20.810160    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:20.810209    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:27.241195    4677 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:27.245554    4677 out.go:177] 
	W0725 11:17:27.249684    4677 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0725 11:17:27.249692    4677 out.go:239] * 
	W0725 11:17:27.250334    4677 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:17:27.261620    4677 out.go:177] 
	I0725 11:17:25.810571    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:25.810609    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0725 11:17:30.809035    4843 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0725 11:17:30.811067    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:30.811086    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:30.813375    4843 out.go:177] * Enabled addons: storage-provisioner
	I0725 11:17:30.824243    4843 addons.go:510] duration metric: took 30.471482208s for enable addons: enabled=[storage-provisioner]
	I0725 11:17:35.811772    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:35.811823    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-07-25 18:08:37 UTC, ends at Thu 2024-07-25 18:17:43 UTC. --
	Jul 25 18:17:28 running-upgrade-159000 dockerd[3167]: time="2024-07-25T18:17:28.548309894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 25 18:17:28 running-upgrade-159000 dockerd[3167]: time="2024-07-25T18:17:28.548405598Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 25 18:17:28 running-upgrade-159000 dockerd[3167]: time="2024-07-25T18:17:28.548432389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 25 18:17:28 running-upgrade-159000 dockerd[3167]: time="2024-07-25T18:17:28.548496219Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/21872e5bfa84ebdc966c6529027087025bd76735d981bb6aefcb3795444146e5 pid=18843 runtime=io.containerd.runc.v2
	Jul 25 18:17:28 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:28Z" level=error msg="ContainerStats resp: {0x400083df00 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008ea900 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008218c0 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008ea300 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008ea680 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008eaac0 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008eac00 linux}"
	Jul 25 18:17:29 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:29Z" level=error msg="ContainerStats resp: {0x40008217c0 linux}"
	Jul 25 18:17:30 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 25 18:17:35 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 25 18:17:39 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:39Z" level=error msg="ContainerStats resp: {0x400083c040 linux}"
	Jul 25 18:17:39 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:39Z" level=error msg="ContainerStats resp: {0x400083cb00 linux}"
	Jul 25 18:17:40 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 25 18:17:40 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:40Z" level=error msg="ContainerStats resp: {0x40007f2ac0 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40006274c0 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40007f38c0 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40007f3ec0 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40004e4d40 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40004e5680 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40007e4900 linux}"
	Jul 25 18:17:41 running-upgrade-159000 cri-dockerd[3008]: time="2024-07-25T18:17:41Z" level=error msg="ContainerStats resp: {0x40007e4f40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	21872e5bfa84e       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   43f1f1ac7233b
	c76d682028bb7       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   b3220410ebc4e
	124f9697a91cf       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   43f1f1ac7233b
	b0d5f89110b71       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b3220410ebc4e
	812bdfe73d04e       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   459552e2fe038
	6a65d2a52feac       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   47cc57b4d6e4d
	618446cabe76c       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9c2f4191c595e
	b579aafdbaaa2       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e9124e2792000
	4fc39de1f40f1       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   6922e716f7217
	a7e36ee327396       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   19a59bf904c90
	
	
	==> coredns [124f9697a91c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:35246->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:49295->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:52672->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:48039->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:39876->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:49696->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:56922->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:50191->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:45772->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2134202470742120674.1331772754146831695. HINFO: read udp 10.244.0.3:55456->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [21872e5bfa84] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7272984858058724616.5357436366696226146. HINFO: read udp 10.244.0.3:39854->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7272984858058724616.5357436366696226146. HINFO: read udp 10.244.0.3:43860->10.0.2.3:53: i/o timeout
	
	
	==> coredns [b0d5f89110b7] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:44194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:37621->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:51833->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:52283->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:60900->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:56915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:54674->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:46377->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:35798->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8167543469334697890.7385747135209363441. HINFO: read udp 10.244.0.2:50578->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c76d682028bb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 890113922409096412.5829328685429919812. HINFO: read udp 10.244.0.2:43862->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 890113922409096412.5829328685429919812. HINFO: read udp 10.244.0.2:48988->10.0.2.3:53: i/o timeout
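
All four coredns blocks above fail the same way: the HINFO self-probe to the upstream resolver at 10.0.2.3:53 (QEMU's user-mode-network DNS forwarder) times out, so the pods never confirm upstream connectivity. Below is a minimal Go sketch that reproduces that upstream query from inside the guest; the target address comes from the log, while the probe hostname and the timeouts are illustrative choices, not coredns's actual probe parameters.

	package main

	// Probes the upstream resolver that coredns reports as unreachable.
	// Assumptions: 10.0.2.3:53 is taken from the log above; "kubernetes.io"
	// and the 2s/3s timeouts are arbitrary, for illustration only.

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true, // force the pure-Go resolver so Dial below is used
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Ignore the address Go picked and dial the logged upstream directly.
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			// On this guest we would expect an i/o timeout, matching the coredns errors.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}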
	
	
	==> describe nodes <==
	Name:               running-upgrade-159000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-159000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=running-upgrade-159000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T11_13_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:13:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-159000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:17:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:13:26 +0000   Thu, 25 Jul 2024 18:13:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:13:26 +0000   Thu, 25 Jul 2024 18:13:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:13:26 +0000   Thu, 25 Jul 2024 18:13:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:13:26 +0000   Thu, 25 Jul 2024 18:13:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-159000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 d84ab6af70ad4ddcad717694718af5c9
	  System UUID:                d84ab6af70ad4ddcad717694718af5c9
	  Boot ID:                    8559a975-0cab-4d16-bba6-079674eb2c7e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-fklwh                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-v2ms5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-159000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-159000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-159000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-8kg4s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-159000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-159000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-159000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-159000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-159000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-159000 event: Registered Node running-upgrade-159000 in Controller
	
	
	==> dmesg <==
	[  +2.171562] systemd-fstab-generator[884]: Ignoring "noauto" for root device
	[  +0.076389] systemd-fstab-generator[895]: Ignoring "noauto" for root device
	[  +0.064152] systemd-fstab-generator[906]: Ignoring "noauto" for root device
	[  +1.138940] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.080409] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +0.074888] systemd-fstab-generator[1067]: Ignoring "noauto" for root device
	[  +2.150774] systemd-fstab-generator[1297]: Ignoring "noauto" for root device
	[Jul25 18:09] systemd-fstab-generator[1842]: Ignoring "noauto" for root device
	[  +2.957598] systemd-fstab-generator[2203]: Ignoring "noauto" for root device
	[  +0.137644] systemd-fstab-generator[2236]: Ignoring "noauto" for root device
	[  +0.098526] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.094257] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +3.199767] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.192385] systemd-fstab-generator[2963]: Ignoring "noauto" for root device
	[  +0.065426] systemd-fstab-generator[2976]: Ignoring "noauto" for root device
	[  +0.080212] systemd-fstab-generator[2987]: Ignoring "noauto" for root device
	[  +0.081590] systemd-fstab-generator[3001]: Ignoring "noauto" for root device
	[  +2.366243] systemd-fstab-generator[3153]: Ignoring "noauto" for root device
	[  +1.863601] systemd-fstab-generator[3535]: Ignoring "noauto" for root device
	[  +1.074721] systemd-fstab-generator[3761]: Ignoring "noauto" for root device
	[ +20.086480] kauditd_printk_skb: 68 callbacks suppressed
	[Jul25 18:13] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.499619] systemd-fstab-generator[11863]: Ignoring "noauto" for root device
	[  +6.140303] systemd-fstab-generator[12484]: Ignoring "noauto" for root device
	[  +0.473430] systemd-fstab-generator[12618]: Ignoring "noauto" for root device
	
	
	==> etcd [b579aafdbaaa] <==
	{"level":"info","ts":"2024-07-25T18:13:21.291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-25T18:13:21.291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-25T18:13:21.292Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:13:21.292Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-25T18:13:21.292Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-25T18:13:21.292Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:13:21.292Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-25T18:13:22.258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-25T18:13:22.259Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:13:22.259Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-159000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:13:22.259Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:13:22.262Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-25T18:13:22.262Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:13:22.263Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:13:22.263Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:13:22.263Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:13:22.264Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:13:22.265Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:13:22.265Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:17:43 up 9 min,  0 users,  load average: 0.32, 0.32, 0.18
	Linux running-upgrade-159000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [618446cabe76] <==
	I0725 18:13:23.443868       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0725 18:13:23.452170       1 controller.go:611] quota admission added evaluator for: namespaces
	I0725 18:13:23.488504       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0725 18:13:23.488573       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0725 18:13:23.488596       1 cache.go:39] Caches are synced for autoregister controller
	I0725 18:13:23.488693       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0725 18:13:23.488965       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0725 18:13:24.230994       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 18:13:24.398588       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0725 18:13:24.403273       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0725 18:13:24.403309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0725 18:13:24.550212       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 18:13:24.560281       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0725 18:13:24.660719       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0725 18:13:24.662917       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0725 18:13:24.663310       1 controller.go:611] quota admission added evaluator for: endpoints
	I0725 18:13:24.664767       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 18:13:25.519373       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0725 18:13:26.276388       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0725 18:13:26.280584       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0725 18:13:26.286467       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0725 18:13:26.338956       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 18:13:40.131109       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0725 18:13:40.280189       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0725 18:13:41.722866       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [4fc39de1f40f] <==
	I0725 18:13:39.383594       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 18:13:39.385908       1 shared_informer.go:262] Caches are synced for service account
	I0725 18:13:39.395790       1 shared_informer.go:262] Caches are synced for GC
	I0725 18:13:39.396916       1 shared_informer.go:262] Caches are synced for TTL
	I0725 18:13:39.424196       1 shared_informer.go:262] Caches are synced for disruption
	I0725 18:13:39.424203       1 disruption.go:371] Sending events to api server.
	I0725 18:13:39.429187       1 shared_informer.go:262] Caches are synced for taint
	I0725 18:13:39.429253       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0725 18:13:39.429283       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-159000. Assuming now as a timestamp.
	I0725 18:13:39.429337       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0725 18:13:39.429448       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0725 18:13:39.429935       1 event.go:294] "Event occurred" object="running-upgrade-159000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-159000 event: Registered Node running-upgrade-159000 in Controller"
	I0725 18:13:39.431005       1 shared_informer.go:262] Caches are synced for daemon sets
	I0725 18:13:39.478424       1 shared_informer.go:262] Caches are synced for resource quota
	I0725 18:13:39.480677       1 shared_informer.go:262] Caches are synced for expand
	I0725 18:13:39.529025       1 shared_informer.go:262] Caches are synced for PV protection
	I0725 18:13:39.530170       1 shared_informer.go:262] Caches are synced for attach detach
	I0725 18:13:39.578926       1 shared_informer.go:262] Caches are synced for persistent volume
	I0725 18:13:39.994801       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 18:13:40.082520       1 shared_informer.go:262] Caches are synced for garbage collector
	I0725 18:13:40.082533       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 18:13:40.132709       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0725 18:13:40.282904       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8kg4s"
	I0725 18:13:40.381599       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-v2ms5"
	I0725 18:13:40.386122       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fklwh"
	
	
	==> kube-proxy [812bdfe73d04] <==
	I0725 18:13:41.712374       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0725 18:13:41.712402       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0725 18:13:41.712412       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0725 18:13:41.721174       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0725 18:13:41.721181       1 server_others.go:206] "Using iptables Proxier"
	I0725 18:13:41.721195       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0725 18:13:41.721300       1 server.go:661] "Version info" version="v1.24.1"
	I0725 18:13:41.721304       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:13:41.721538       1 config.go:317] "Starting service config controller"
	I0725 18:13:41.721544       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0725 18:13:41.721552       1 config.go:226] "Starting endpoint slice config controller"
	I0725 18:13:41.721553       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0725 18:13:41.721758       1 config.go:444] "Starting node config controller"
	I0725 18:13:41.721760       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0725 18:13:41.822488       1 shared_informer.go:262] Caches are synced for node config
	I0725 18:13:41.822502       1 shared_informer.go:262] Caches are synced for service config
	I0725 18:13:41.822525       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a7e36ee32739] <==
	W0725 18:13:23.454385       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:13:23.454392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:13:23.454629       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 18:13:23.454968       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 18:13:23.454871       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:13:23.455070       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:13:23.454894       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:13:23.455145       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:13:23.454929       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 18:13:23.455270       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 18:13:23.454943       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 18:13:23.455321       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 18:13:23.454962       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:13:23.455353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:13:24.346233       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:13:24.346304       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:13:24.376995       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:13:24.377032       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:13:24.377127       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:13:24.377236       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:13:24.464621       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:13:24.464760       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:13:24.486708       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:13:24.486724       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0725 18:13:25.052337       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-07-25 18:08:37 UTC, ends at Thu 2024-07-25 18:17:43 UTC. --
	Jul 25 18:13:28 running-upgrade-159000 kubelet[12490]: I0725 18:13:28.502004   12490 request.go:601] Waited for 1.103699092s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 25 18:13:28 running-upgrade-159000 kubelet[12490]: E0725 18:13:28.506497   12490 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-159000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-159000"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: I0725 18:13:39.435672   12490 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: I0725 18:13:39.440819   12490 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: I0725 18:13:39.440949   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckk5\" (UniqueName: \"kubernetes.io/projected/88af6b25-0e07-4c59-8316-617d6f75415e-kube-api-access-jckk5\") pod \"storage-provisioner\" (UID: \"88af6b25-0e07-4c59-8316-617d6f75415e\") " pod="kube-system/storage-provisioner"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: I0725 18:13:39.440960   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/88af6b25-0e07-4c59-8316-617d6f75415e-tmp\") pod \"storage-provisioner\" (UID: \"88af6b25-0e07-4c59-8316-617d6f75415e\") " pod="kube-system/storage-provisioner"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: I0725 18:13:39.441170   12490 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: E0725 18:13:39.545255   12490 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: E0725 18:13:39.545269   12490 projected.go:192] Error preparing data for projected volume kube-api-access-jckk5 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 25 18:13:39 running-upgrade-159000 kubelet[12490]: E0725 18:13:39.545299   12490 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/88af6b25-0e07-4c59-8316-617d6f75415e-kube-api-access-jckk5 podName:88af6b25-0e07-4c59-8316-617d6f75415e nodeName:}" failed. No retries permitted until 2024-07-25 18:13:40.045286905 +0000 UTC m=+13.781256377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jckk5" (UniqueName: "kubernetes.io/projected/88af6b25-0e07-4c59-8316-617d6f75415e-kube-api-access-jckk5") pod "storage-provisioner" (UID: "88af6b25-0e07-4c59-8316-617d6f75415e") : configmap "kube-root-ca.crt" not found
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.286474   12490 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.383027   12490 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.388019   12490 topology_manager.go:200] "Topology Admit Handler"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449006   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/331c47a4-474d-46ab-aa3b-04c86f06dcf4-kube-proxy\") pod \"kube-proxy-8kg4s\" (UID: \"331c47a4-474d-46ab-aa3b-04c86f06dcf4\") " pod="kube-system/kube-proxy-8kg4s"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449253   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxtz\" (UniqueName: \"kubernetes.io/projected/331c47a4-474d-46ab-aa3b-04c86f06dcf4-kube-api-access-scxtz\") pod \"kube-proxy-8kg4s\" (UID: \"331c47a4-474d-46ab-aa3b-04c86f06dcf4\") " pod="kube-system/kube-proxy-8kg4s"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449272   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca0d94c4-266b-4c63-9863-a64be3412ebb-config-volume\") pod \"coredns-6d4b75cb6d-v2ms5\" (UID: \"ca0d94c4-266b-4c63-9863-a64be3412ebb\") " pod="kube-system/coredns-6d4b75cb6d-v2ms5"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449282   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331c47a4-474d-46ab-aa3b-04c86f06dcf4-xtables-lock\") pod \"kube-proxy-8kg4s\" (UID: \"331c47a4-474d-46ab-aa3b-04c86f06dcf4\") " pod="kube-system/kube-proxy-8kg4s"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449294   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331c47a4-474d-46ab-aa3b-04c86f06dcf4-lib-modules\") pod \"kube-proxy-8kg4s\" (UID: \"331c47a4-474d-46ab-aa3b-04c86f06dcf4\") " pod="kube-system/kube-proxy-8kg4s"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449312   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj2fx\" (UniqueName: \"kubernetes.io/projected/ca0d94c4-266b-4c63-9863-a64be3412ebb-kube-api-access-gj2fx\") pod \"coredns-6d4b75cb6d-v2ms5\" (UID: \"ca0d94c4-266b-4c63-9863-a64be3412ebb\") " pod="kube-system/coredns-6d4b75cb6d-v2ms5"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449323   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a491f9d5-7272-4fdd-a187-77fa6a3fedd5-config-volume\") pod \"coredns-6d4b75cb6d-fklwh\" (UID: \"a491f9d5-7272-4fdd-a187-77fa6a3fedd5\") " pod="kube-system/coredns-6d4b75cb6d-fklwh"
	Jul 25 18:13:40 running-upgrade-159000 kubelet[12490]: I0725 18:13:40.449333   12490 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65gdc\" (UniqueName: \"kubernetes.io/projected/a491f9d5-7272-4fdd-a187-77fa6a3fedd5-kube-api-access-65gdc\") pod \"coredns-6d4b75cb6d-fklwh\" (UID: \"a491f9d5-7272-4fdd-a187-77fa6a3fedd5\") " pod="kube-system/coredns-6d4b75cb6d-fklwh"
	Jul 25 18:13:41 running-upgrade-159000 kubelet[12490]: I0725 18:13:41.481772   12490 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b3220410ebc4e35a75103f915a1e5849f36fcfe6758ea80abbe9c6ede2f05277"
	Jul 25 18:13:41 running-upgrade-159000 kubelet[12490]: I0725 18:13:41.507663   12490 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="43f1f1ac7233bb5f88bae766b60e7694fd64f98d440ee7dfdaf6e501a14cdbf7"
	Jul 25 18:17:28 running-upgrade-159000 kubelet[12490]: I0725 18:17:28.757062   12490 scope.go:110] "RemoveContainer" containerID="1b057df70f635b29b145dd85b28a9c612121286fe4fc04fdc7f1840e5e9796bc"
	Jul 25 18:17:28 running-upgrade-159000 kubelet[12490]: I0725 18:17:28.772654   12490 scope.go:110] "RemoveContainer" containerID="f09e78a809a613e8c0c3eac1825511d955b9bfa74d279a11c1fae562fff94250"
	
	
	==> storage-provisioner [6a65d2a52fea] <==
	I0725 18:13:40.221298       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:13:40.226652       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:13:40.226744       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:13:40.231669       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:13:40.231900       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60b2d009-af8a-4c8d-9031-7c01e3be0c2b", APIVersion:"v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-159000_441f0c05-b2b2-4de2-a643-6766ecee614c became leader
	I0725 18:13:40.231986       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-159000_441f0c05-b2b2-4de2-a643-6766ecee614c!
	I0725 18:13:40.333305       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-159000_441f0c05-b2b2-4de2-a643-6766ecee614c!

-- /stdout --
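
The decisive failure in the dump above is the healthz poll: both minikube processes (4677 and 4843) repeatedly log `stopped: https://10.0.2.15:8443/healthz ... Client.Timeout exceeded`, and after the 6m0s node wait the start aborts with GUEST_START. Below is a minimal sketch of that probe, not minikube's actual implementation: the endpoint URL is taken from the log, while the 2-second per-request timeout and the TLS skip-verify are assumptions (the apiserver serves a self-signed certificate, so an unauthenticated probe has to skip verification).

	package main

	// Re-creates the healthz probe the log shows timing out. Assumptions:
	// the URL comes from the log; the timeout and InsecureSkipVerify are
	// illustrative choices, not minikube's exact settings.

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// Matches the logged "stopped: ... Client.Timeout exceeded" failures.
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
	}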
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-159000 -n running-upgrade-159000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-159000 -n running-upgrade-159000: exit status 2 (15.619918042s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-159000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-159000
--- FAIL: TestRunningBinaryUpgrade (592.25s)

TestKubernetesUpgrade (18.63s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.840052542s)

-- stdout --
	* [kubernetes-upgrade-567000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-567000" primary control-plane node in "kubernetes-upgrade-567000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-567000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
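
Both VM creation attempts above die the same way: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config logged below), which means nothing was listening on that unix socket on this host. A quick check is sketched below under that assumption; the path comes from the error, the timeout is arbitrary, and the program typically needs root since /var/run is restricted.

	package main

	// Checks whether anything is accepting connections on the socket_vmnet
	// unix socket. A "connection refused" here reproduces the driver's
	// failure mode; a successful dial means the daemon is up.

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}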
** stderr ** 
	I0725 11:11:07.440446    4764 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:11:07.440569    4764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:11:07.440573    4764 out.go:304] Setting ErrFile to fd 2...
	I0725 11:11:07.440575    4764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:11:07.440693    4764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:11:07.441893    4764 out.go:298] Setting JSON to false
	I0725 11:11:07.458000    4764 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4231,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:11:07.458072    4764 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:11:07.463702    4764 out.go:177] * [kubernetes-upgrade-567000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:11:07.470688    4764 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:11:07.470752    4764 notify.go:220] Checking for updates...
	I0725 11:11:07.477614    4764 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:11:07.480640    4764 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:11:07.483661    4764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:11:07.484900    4764 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:11:07.487616    4764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:11:07.490974    4764 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:11:07.491041    4764 config.go:182] Loaded profile config "running-upgrade-159000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:11:07.491088    4764 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:11:07.495515    4764 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:11:07.502696    4764 start.go:297] selected driver: qemu2
	I0725 11:11:07.502706    4764 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:11:07.502716    4764 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:11:07.504993    4764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:11:07.508681    4764 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:11:07.511661    4764 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 11:11:07.511686    4764 cni.go:84] Creating CNI manager for ""
	I0725 11:11:07.511692    4764 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 11:11:07.511718    4764 start.go:340] cluster config:
	{Name:kubernetes-upgrade-567000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-567000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:11:07.515041    4764 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:11:07.521661    4764 out.go:177] * Starting "kubernetes-upgrade-567000" primary control-plane node in "kubernetes-upgrade-567000" cluster
	I0725 11:11:07.525595    4764 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 11:11:07.525609    4764 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 11:11:07.525614    4764 cache.go:56] Caching tarball of preloaded images
	I0725 11:11:07.525665    4764 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:11:07.525670    4764 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0725 11:11:07.525721    4764 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kubernetes-upgrade-567000/config.json ...
	I0725 11:11:07.525731    4764 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kubernetes-upgrade-567000/config.json: {Name:mk888f31aea7566f669712bb6144b7e9274619f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:11:07.526021    4764 start.go:360] acquireMachinesLock for kubernetes-upgrade-567000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:11:07.526054    4764 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "kubernetes-upgrade-567000"
	I0725 11:11:07.526064    4764 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-567000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-567000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:11:07.526094    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:11:07.534731    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:11:07.550222    4764 start.go:159] libmachine.API.Create for "kubernetes-upgrade-567000" (driver="qemu2")
	I0725 11:11:07.550245    4764 client.go:168] LocalClient.Create starting
	I0725 11:11:07.550312    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:11:07.550343    4764 main.go:141] libmachine: Decoding PEM data...
	I0725 11:11:07.550354    4764 main.go:141] libmachine: Parsing certificate...
	I0725 11:11:07.550396    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:11:07.550418    4764 main.go:141] libmachine: Decoding PEM data...
	I0725 11:11:07.550428    4764 main.go:141] libmachine: Parsing certificate...
	I0725 11:11:07.550870    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:11:07.712504    4764 main.go:141] libmachine: Creating SSH key...
	I0725 11:11:07.786917    4764 main.go:141] libmachine: Creating Disk image...
	I0725 11:11:07.786923    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:11:07.787080    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:07.796481    4764 main.go:141] libmachine: STDOUT: 
	I0725 11:11:07.796501    4764 main.go:141] libmachine: STDERR: 
	I0725 11:11:07.796556    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2 +20000M
	I0725 11:11:07.804440    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:11:07.804453    4764 main.go:141] libmachine: STDERR: 
	I0725 11:11:07.804472    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:07.804481    4764 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:11:07.804494    4764 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:11:07.804523    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:43:59:67:e8:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:07.806149    4764 main.go:141] libmachine: STDOUT: 
	I0725 11:11:07.806162    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:11:07.806185    4764 client.go:171] duration metric: took 255.943084ms to LocalClient.Create
	I0725 11:11:09.808423    4764 start.go:128] duration metric: took 2.28236325s to createHost
	I0725 11:11:09.808509    4764 start.go:83] releasing machines lock for "kubernetes-upgrade-567000", held for 2.282501083s
	W0725 11:11:09.808646    4764 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:11:09.812978    4764 out.go:177] * Deleting "kubernetes-upgrade-567000" in qemu2 ...
	W0725 11:11:09.843334    4764 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:11:09.843363    4764 start.go:729] Will try again in 5 seconds ...
	I0725 11:11:14.845405    4764 start.go:360] acquireMachinesLock for kubernetes-upgrade-567000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:11:14.845977    4764 start.go:364] duration metric: took 481.25µs to acquireMachinesLock for "kubernetes-upgrade-567000"
	I0725 11:11:14.846053    4764 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-567000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-567000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:11:14.846316    4764 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:11:14.851869    4764 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:11:14.903161    4764 start.go:159] libmachine.API.Create for "kubernetes-upgrade-567000" (driver="qemu2")
	I0725 11:11:14.903216    4764 client.go:168] LocalClient.Create starting
	I0725 11:11:14.903444    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:11:14.903534    4764 main.go:141] libmachine: Decoding PEM data...
	I0725 11:11:14.903552    4764 main.go:141] libmachine: Parsing certificate...
	I0725 11:11:14.903642    4764 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:11:14.903689    4764 main.go:141] libmachine: Decoding PEM data...
	I0725 11:11:14.903704    4764 main.go:141] libmachine: Parsing certificate...
	I0725 11:11:14.904193    4764 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:11:15.067924    4764 main.go:141] libmachine: Creating SSH key...
	I0725 11:11:15.194120    4764 main.go:141] libmachine: Creating Disk image...
	I0725 11:11:15.194131    4764 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:11:15.194354    4764 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:15.203982    4764 main.go:141] libmachine: STDOUT: 
	I0725 11:11:15.204002    4764 main.go:141] libmachine: STDERR: 
	I0725 11:11:15.204057    4764 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2 +20000M
	I0725 11:11:15.212232    4764 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:11:15.212246    4764 main.go:141] libmachine: STDERR: 
	I0725 11:11:15.212263    4764 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:15.212271    4764 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:11:15.212281    4764 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:11:15.212309    4764 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:03:76:0e:de:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:15.213974    4764 main.go:141] libmachine: STDOUT: 
	I0725 11:11:15.213987    4764 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:11:15.213999    4764 client.go:171] duration metric: took 310.786042ms to LocalClient.Create
	I0725 11:11:17.216190    4764 start.go:128] duration metric: took 2.369838625s to createHost
	I0725 11:11:17.216259    4764 start.go:83] releasing machines lock for "kubernetes-upgrade-567000", held for 2.370328s
	W0725 11:11:17.216595    4764 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:11:17.224180    4764 out.go:177] 
	W0725 11:11:17.229254    4764 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:11:17.229268    4764 out.go:239] * 
	* 
	W0725 11:11:17.230877    4764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:11:17.240207    4764 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
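Every failed start in this test dies at the same step: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU never gets a network file descriptor. A minimal Go sketch for probing that socket from the affected host (paths taken from the log above; nothing minikube-specific is assumed):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the Unix socket the qemu2 driver depends on. When the
// socket_vmnet daemon is not running or not listening here,
// DialTimeout returns "connection refused", matching the ERROR
// lines in the log above.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}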
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-567000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-567000: (3.376705958s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-567000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-567000 status --format={{.Host}}: exit status 7 (63.159875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
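The --format={{.Host}} argument is a Go text/template rendered against minikube's status object, which is why the stdout above is just the bare word Stopped while exit status 7 encodes the stopped state. A self-contained sketch of that rendering pattern (the one-field struct is a simplification for illustration, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Stand-in status object; the real one has more fields, but
// {{.Host}} resolves against a Host field the same way.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
}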
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184379417s)

-- stdout --
	* [kubernetes-upgrade-567000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-567000" primary control-plane node in "kubernetes-upgrade-567000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-567000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-567000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:11:20.724967    4800 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:11:20.725099    4800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:11:20.725102    4800 out.go:304] Setting ErrFile to fd 2...
	I0725 11:11:20.725105    4800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:11:20.725237    4800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:11:20.726242    4800 out.go:298] Setting JSON to false
	I0725 11:11:20.742663    4800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4244,"bootTime":1721926836,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:11:20.742737    4800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:11:20.744630    4800 out.go:177] * [kubernetes-upgrade-567000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:11:20.752495    4800 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:11:20.752607    4800 notify.go:220] Checking for updates...
	I0725 11:11:20.759419    4800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:11:20.762523    4800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:11:20.765495    4800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:11:20.768452    4800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:11:20.771463    4800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:11:20.774658    4800 config.go:182] Loaded profile config "kubernetes-upgrade-567000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0725 11:11:20.774935    4800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:11:20.779489    4800 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:11:20.785469    4800 start.go:297] selected driver: qemu2
	I0725 11:11:20.785474    4800 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-567000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-567000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:11:20.785515    4800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:11:20.787727    4800 cni.go:84] Creating CNI manager for ""
	I0725 11:11:20.787744    4800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:11:20.787773    4800 start.go:340] cluster config:
	{Name:kubernetes-upgrade-567000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-567000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:11:20.791215    4800 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:11:20.798476    4800 out.go:177] * Starting "kubernetes-upgrade-567000" primary control-plane node in "kubernetes-upgrade-567000" cluster
	I0725 11:11:20.802336    4800 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:11:20.802350    4800 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0725 11:11:20.802357    4800 cache.go:56] Caching tarball of preloaded images
	I0725 11:11:20.802404    4800 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:11:20.802409    4800 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 11:11:20.802456    4800 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kubernetes-upgrade-567000/config.json ...
	I0725 11:11:20.802868    4800 start.go:360] acquireMachinesLock for kubernetes-upgrade-567000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:11:20.802894    4800 start.go:364] duration metric: took 20.5µs to acquireMachinesLock for "kubernetes-upgrade-567000"
	I0725 11:11:20.802914    4800 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:11:20.802920    4800 fix.go:54] fixHost starting: 
	I0725 11:11:20.803031    4800 fix.go:112] recreateIfNeeded on kubernetes-upgrade-567000: state=Stopped err=<nil>
	W0725 11:11:20.803039    4800 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:11:20.811420    4800 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-567000" ...
	I0725 11:11:20.815437    4800 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:11:20.815469    4800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:03:76:0e:de:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:20.817520    4800 main.go:141] libmachine: STDOUT: 
	I0725 11:11:20.817537    4800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:11:20.817566    4800 fix.go:56] duration metric: took 14.645125ms for fixHost
	I0725 11:11:20.817572    4800 start.go:83] releasing machines lock for "kubernetes-upgrade-567000", held for 14.67375ms
	W0725 11:11:20.817577    4800 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:11:20.817608    4800 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:11:20.817613    4800 start.go:729] Will try again in 5 seconds ...
	I0725 11:11:25.819715    4800 start.go:360] acquireMachinesLock for kubernetes-upgrade-567000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:11:25.820377    4800 start.go:364] duration metric: took 515.875µs to acquireMachinesLock for "kubernetes-upgrade-567000"
	I0725 11:11:25.820511    4800 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:11:25.820534    4800 fix.go:54] fixHost starting: 
	I0725 11:11:25.821359    4800 fix.go:112] recreateIfNeeded on kubernetes-upgrade-567000: state=Stopped err=<nil>
	W0725 11:11:25.821387    4800 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:11:25.832848    4800 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-567000" ...
	I0725 11:11:25.836750    4800 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:11:25.837049    4800 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:03:76:0e:de:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubernetes-upgrade-567000/disk.qcow2
	I0725 11:11:25.847103    4800 main.go:141] libmachine: STDOUT: 
	I0725 11:11:25.847164    4800 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:11:25.847254    4800 fix.go:56] duration metric: took 26.723916ms for fixHost
	I0725 11:11:25.847273    4800 start.go:83] releasing machines lock for "kubernetes-upgrade-567000", held for 26.871666ms
	W0725 11:11:25.847454    4800 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-567000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-567000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:11:25.854794    4800 out.go:177] 
	W0725 11:11:25.857842    4800 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:11:25.857908    4800 out.go:239] * 
	* 
	W0725 11:11:25.860076    4800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:11:25.868799    4800 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-567000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-567000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-567000 version --output=json: exit status 1 (51.192083ms)

** stderr ** 
	error: context "kubernetes-upgrade-567000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
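The kubectl failure is a downstream effect: a kubeconfig context for the profile is only written once a start gets far enough to bootstrap the cluster, and both starts above failed during VM provisioning, so no kubernetes-upgrade-567000 entry exists. A small sketch that lists whatever contexts the kubeconfig does contain, using only stock kubectl subcommands:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Print the context names kubectl can see; on this agent the
// kubernetes-upgrade-567000 context would be absent.
func main() {
	cmd := exec.Command("kubectl", "config", "get-contexts", "-o", "name")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("kubectl failed:", err)
	}
}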
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-25 11:11:25.932504 -0700 PDT m=+2611.871066917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-567000 -n kubernetes-upgrade-567000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-567000 -n kubernetes-upgrade-567000: exit status 7 (31.357667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-567000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-567000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-567000
--- FAIL: TestKubernetesUpgrade (18.63s)
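For reference, the flow this test drives can be replayed by hand against the same binaries; each step below mirrors one of the (dbg) Run: lines in this failure. The profile name is shortened for local use and the cleanup delete that the post-mortem performs is included as the final step; this is a sketch of the sequence, not the test's actual implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Run a command with its output wired to the terminal.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

// Start on the oldest supported Kubernetes, stop, restart on the
// newest, query the server version, then delete the profile.
func main() {
	steps := [][]string{
		{"minikube", "start", "-p", "k8s-upgrade", "--memory=2200", "--kubernetes-version=v1.20.0", "--driver=qemu2"},
		{"minikube", "stop", "-p", "k8s-upgrade"},
		{"minikube", "start", "-p", "k8s-upgrade", "--memory=2200", "--kubernetes-version=v1.31.0-beta.0", "--driver=qemu2"},
		{"kubectl", "--context", "k8s-upgrade", "version", "--output=json"},
		{"minikube", "delete", "-p", "k8s-upgrade"},
	}
	for _, step := range steps {
		if err := run(step[0], step[1:]...); err != nil {
			fmt.Printf("%v failed: %v\n", step, err)
			os.Exit(1)
		}
	}
}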

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.68s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19326
- KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2722265383/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.68s)
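This subtest and its v1.2.0 sibling below fail identically: the hyperkit driver only exists for darwin/amd64, so on this arm64 agent minikube exits with code 56 (DRV_UNSUPPORTED_OS) and the test records a failure rather than a skip. A hedged sketch of the kind of architecture guard that would turn this into a skip on Apple Silicon agents; the test body is illustrative, not minikube's actual code:

package hyperkit_test

import (
	"runtime"
	"testing"
)

// Skip rather than fail on platforms where hyperkit cannot run;
// this agent reports darwin/arm64.
func TestHyperkitDriverSkipUpgrade(t *testing.T) {
	if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
		t.Skipf("hyperkit requires darwin/amd64, got %s/%s", runtime.GOOS, runtime.GOARCH)
	}
	// ...the real upgrade assertions would run here...
}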

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19326
- KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3463180305/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.38s)

TestStoppedBinaryUpgrade/Upgrade (574.87s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3882142508 start -p stopped-upgrade-820000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3882142508 start -p stopped-upgrade-820000 --memory=2200 --vm-driver=qemu2 : (40.586959666s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3882142508 -p stopped-upgrade-820000 stop
E0725 11:12:10.207998    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3882142508 -p stopped-upgrade-820000 stop: (12.119844333s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-820000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0725 11:14:15.226122    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 11:17:10.184950    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 11:17:18.276368    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-820000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.049526458s)

-- stdout --
	* [stopped-upgrade-820000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-820000" primary control-plane node in "stopped-upgrade-820000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-820000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0725 11:12:19.842183    4843 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:12:19.842351    4843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:12:19.842356    4843 out.go:304] Setting ErrFile to fd 2...
	I0725 11:12:19.842363    4843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:12:19.842545    4843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:12:19.843739    4843 out.go:298] Setting JSON to false
	I0725 11:12:19.863206    4843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4303,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:12:19.863277    4843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:12:19.868533    4843 out.go:177] * [stopped-upgrade-820000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:12:19.876525    4843 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:12:19.876578    4843 notify.go:220] Checking for updates...
	I0725 11:12:19.883470    4843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:12:19.886511    4843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:12:19.889491    4843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:12:19.892484    4843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:12:19.895484    4843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:12:19.897190    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:12:19.900393    4843 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 11:12:19.903490    4843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:12:19.907327    4843 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:12:19.914447    4843 start.go:297] selected driver: qemu2
	I0725 11:12:19.914454    4843 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:19.914507    4843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:12:19.917078    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:12:19.917097    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:12:19.917136    4843 start.go:340] cluster config:
	{Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:19.917207    4843 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:12:19.924438    4843 out.go:177] * Starting "stopped-upgrade-820000" primary control-plane node in "stopped-upgrade-820000" cluster
	I0725 11:12:19.928458    4843 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:12:19.928472    4843 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0725 11:12:19.928477    4843 cache.go:56] Caching tarball of preloaded images
	I0725 11:12:19.928532    4843 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:12:19.928537    4843 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0725 11:12:19.928584    4843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/config.json ...
	I0725 11:12:19.928993    4843 start.go:360] acquireMachinesLock for stopped-upgrade-820000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:12:19.929020    4843 start.go:364] duration metric: took 21.083µs to acquireMachinesLock for "stopped-upgrade-820000"
	I0725 11:12:19.929029    4843 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:12:19.929034    4843 fix.go:54] fixHost starting: 
	I0725 11:12:19.929139    4843 fix.go:112] recreateIfNeeded on stopped-upgrade-820000: state=Stopped err=<nil>
	W0725 11:12:19.929148    4843 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:12:19.936483    4843 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-820000" ...
	I0725 11:12:19.939410    4843 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:12:19.939475    4843 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50463-:22,hostfwd=tcp::50464-:2376,hostname=stopped-upgrade-820000 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/disk.qcow2
	I0725 11:12:19.985166    4843 main.go:141] libmachine: STDOUT: 
	I0725 11:12:19.985191    4843 main.go:141] libmachine: STDERR: 
	I0725 11:12:19.985197    4843 main.go:141] libmachine: Waiting for VM to start (ssh -p 50463 docker@127.0.0.1)...
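
The qemu-system-aarch64 invocation above uses QEMU user-mode networking with hostfwd rules, so the guest's SSH and Docker daemon ports (22 and 2376) are reachable on the host as 127.0.0.1:50463 and 127.0.0.1:50464. "Waiting for VM to start" therefore reduces to polling the forwarded SSH port until something accepts a connection. A minimal sketch of such a readiness poll (the address and timeout mirror this log; waitForSSH is an illustrative name, not minikube's API):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a hostfwd-forwarded guest port until it accepts a
    // TCP connection or the deadline passes. Illustrative sketch only.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // the guest sshd is up
            }
            time.Sleep(500 * time.Millisecond) // VM is still booting
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        // 50463 is the hostfwd port for guest :22 in the command above.
        if err := waitForSSH("127.0.0.1:50463", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
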
	I0725 11:12:39.856243    4843 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/config.json ...
	I0725 11:12:39.856631    4843 machine.go:94] provisionDockerMachine start ...
	I0725 11:12:39.856703    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:39.856950    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:39.856960    4843 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 11:12:39.931575    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0725 11:12:39.931597    4843 buildroot.go:166] provisioning hostname "stopped-upgrade-820000"
	I0725 11:12:39.931668    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:39.931845    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:39.931855    4843 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-820000 && echo "stopped-upgrade-820000" | sudo tee /etc/hostname
	I0725 11:12:40.006149    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-820000
	
	I0725 11:12:40.006206    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.006342    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.006353    4843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-820000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-820000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-820000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 11:12:40.071303    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
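
The shell block above is the provisioner's idempotent /etc/hosts edit: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line in place, or append one. The same logic applied to a hosts string (a simplified sketch; the real grep tolerates any whitespace and extra aliases, which this version does not):

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry mirrors the SSH'd shell snippet: skip if present,
    // replace an existing 127.0.1.1 entry, otherwise append.
    func ensureHostsEntry(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            fields := strings.Fields(l)
            if len(fields) == 2 && fields[1] == name {
                return hosts // already mapped
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // the sed branch
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "127.0.1.1 " + name + "\n" // the tee -a branch
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "stopped-upgrade-820000"))
    }
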
	I0725 11:12:40.071313    4843 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19326-1196/.minikube CaCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19326-1196/.minikube}
	I0725 11:12:40.071320    4843 buildroot.go:174] setting up certificates
	I0725 11:12:40.071325    4843 provision.go:84] configureAuth start
	I0725 11:12:40.071336    4843 provision.go:143] copyHostCerts
	I0725 11:12:40.071406    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem, removing ...
	I0725 11:12:40.071414    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem
	I0725 11:12:40.071510    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.pem (1078 bytes)
	I0725 11:12:40.071678    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem, removing ...
	I0725 11:12:40.071683    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem
	I0725 11:12:40.071724    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/cert.pem (1123 bytes)
	I0725 11:12:40.071823    4843 exec_runner.go:144] found /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem, removing ...
	I0725 11:12:40.071827    4843 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem
	I0725 11:12:40.071867    4843 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19326-1196/.minikube/key.pem (1675 bytes)
	I0725 11:12:40.071952    4843 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-820000 san=[127.0.0.1 localhost minikube stopped-upgrade-820000]
	I0725 11:12:40.140982    4843 provision.go:177] copyRemoteCerts
	I0725 11:12:40.141033    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 11:12:40.141042    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.174789    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0725 11:12:40.182053    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0725 11:12:40.189025    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 11:12:40.195649    4843 provision.go:87] duration metric: took 124.323292ms to configureAuth
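
configureAuth issues a server certificate signed by the shared minikube CA with the SANs listed above (127.0.0.1, localhost, minikube, stopped-upgrade-820000), then copies ca.pem, server.pem and server-key.pem into /etc/docker for dockerd's --tlsverify flags. A compact crypto/x509 sketch of issuing such a SAN-bearing cert (throwaway in-memory CA, errors elided for brevity; this is not minikube's actual code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-820000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-820000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
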
	I0725 11:12:40.195661    4843 buildroot.go:189] setting minikube options for container-runtime
	I0725 11:12:40.195769    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:12:40.195807    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.195894    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.195898    4843 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0725 11:12:40.260623    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0725 11:12:40.260631    4843 buildroot.go:70] root file system type: tmpfs
	I0725 11:12:40.260686    4843 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0725 11:12:40.260739    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.260866    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.260903    4843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0725 11:12:40.332436    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0725 11:12:40.332494    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.332610    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.332620    4843 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0725 11:12:40.688212    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0725 11:12:40.688227    4843 machine.go:97] duration metric: took 831.612833ms to provisionDockerMachine
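
Two details of the unit update above are worth noting. The doubled ExecStart= is intentional: an empty ExecStart= clears any command inherited from the base dockerd configuration before the real one is set, exactly as the embedded comments explain. And the `diff -u ... || { mv ...; daemon-reload; restart; }` step makes the update idempotent: docker is only reinstalled and restarted when the rendered unit actually differs (here the "can't stat" output shows the unit did not exist yet, so the install branch ran and created the symlink). A local-file sketch of that compare-then-replace pattern, with hypothetical names:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // updateUnit mimics the diff-or-replace step: only swap in the new
    // unit (and report that a reload/restart is needed) when the rendered
    // content actually differs. Illustrative, local-file version.
    func updateUnit(path string, rendered []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, rendered) {
            return false, nil // identical: skip the docker restart
        }
        // Missing file (the "can't stat" case above) or different content:
        // install the new unit.
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := updateUnit("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println(changed, err)
    }
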
	I0725 11:12:40.688234    4843 start.go:293] postStartSetup for "stopped-upgrade-820000" (driver="qemu2")
	I0725 11:12:40.688241    4843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 11:12:40.688310    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 11:12:40.688321    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.724789    4843 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 11:12:40.725976    4843 info.go:137] Remote host: Buildroot 2021.02.12
	I0725 11:12:40.725982    4843 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/addons for local assets ...
	I0725 11:12:40.726051    4843 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19326-1196/.minikube/files for local assets ...
	I0725 11:12:40.726144    4843 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem -> 16942.pem in /etc/ssl/certs
	I0725 11:12:40.726235    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 11:12:40.728590    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:12:40.735258    4843 start.go:296] duration metric: took 47.020416ms for postStartSetup
	I0725 11:12:40.735272    4843 fix.go:56] duration metric: took 20.806855792s for fixHost
	I0725 11:12:40.735306    4843 main.go:141] libmachine: Using SSH client type: native
	I0725 11:12:40.735410    4843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1050fea10] 0x105101270 <nil>  [] 0s} localhost 50463 <nil> <nil>}
	I0725 11:12:40.735415    4843 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0725 11:12:40.800100    4843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721931161.020804296
	
	I0725 11:12:40.800107    4843 fix.go:216] guest clock: 1721931161.020804296
	I0725 11:12:40.800112    4843 fix.go:229] Guest: 2024-07-25 11:12:41.020804296 -0700 PDT Remote: 2024-07-25 11:12:40.735274 -0700 PDT m=+20.925081542 (delta=285.530296ms)
	I0725 11:12:40.800125    4843 fix.go:200] guest clock delta is within tolerance: 285.530296ms
	I0725 11:12:40.800129    4843 start.go:83] releasing machines lock for "stopped-upgrade-820000", held for 20.871722334s
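
fix.go compares the guest's `date +%s.%N` output against the host clock and accepts small drift; here the delta is about 285ms, within tolerance, so no clock reset is needed. Parsing that output into a time.Time (sketch; it relies on %N being zero-padded to exactly nine digits, which GNU date guarantees):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec := int64(0)
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64) // nine-digit field
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1721931161.020804296") // value from the log
        // A skew of a few hundred milliseconds, as above, is tolerable; a
        // large delta would call for resetting the guest clock.
        fmt.Println("guest clock delta:", time.Until(guest))
    }
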
	I0725 11:12:40.800190    4843 ssh_runner.go:195] Run: cat /version.json
	I0725 11:12:40.800203    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:12:40.800190    4843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 11:12:40.800242    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	W0725 11:12:40.800782    4843 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50463: connect: connection refused
	I0725 11:12:40.800805    4843 retry.go:31] will retry after 251.104985ms: dial tcp [::1]:50463: connect: connection refused
	W0725 11:12:41.086975    4843 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0725 11:12:41.087056    4843 ssh_runner.go:195] Run: systemctl --version
	I0725 11:12:41.089049    4843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0725 11:12:41.090688    4843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0725 11:12:41.090721    4843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0725 11:12:41.093692    4843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0725 11:12:41.099405    4843 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
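
The find/sed pipeline above pins every bridge and podman CNI config under /etc/cni/net.d to the 10.244.0.0/16 pod CIDR (and drops IPv6 entries). The core subnet rewrite, reduced to a Go regexp (a sketch; the real sed expressions also handle trailing commas and gateway fields):

    package main

    import (
        "fmt"
        "regexp"
    )

    // pinSubnet rewrites any "subnet" value in a CNI conflist to the pod
    // CIDR minikube expects. Simplified from the sed pipeline above.
    func pinSubnet(conf string) string {
        re := regexp.MustCompile(`"subnet": "[^"]*"`)
        return re.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
    }

    func main() {
        fmt.Println(pinSubnet(`{"subnet": "10.88.0.0/16"}`))
    }
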
	I0725 11:12:41.099416    4843 start.go:495] detecting cgroup driver to use...
	I0725 11:12:41.099493    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:12:41.109062    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0725 11:12:41.113072    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 11:12:41.118077    4843 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 11:12:41.118133    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 11:12:41.121721    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:12:41.124698    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 11:12:41.127519    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 11:12:41.130709    4843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 11:12:41.134207    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 11:12:41.137632    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 11:12:41.140627    4843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 11:12:41.143532    4843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 11:12:41.146676    4843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 11:12:41.150867    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:41.226461    4843 ssh_runner.go:195] Run: sudo systemctl restart containerd
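
Before settling on docker, the provisioner rewrites /etc/containerd/config.toml with a series of sed expressions to force the cgroupfs cgroup driver (SystemdCgroup = false), switch runtimes to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The SystemdCgroup edit translates directly into a Go regexp (sketch):

    package main

    import (
        "fmt"
        "regexp"
    )

    // forceCgroupfs is the Go equivalent of the sed expression above:
    // flip SystemdCgroup to false while preserving indentation.
    func forceCgroupfs(toml string) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(toml, "${1}SystemdCgroup = false")
    }

    func main() {
        fmt.Print(forceCgroupfs("  SystemdCgroup = true\n"))
    }
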
	I0725 11:12:41.232742    4843 start.go:495] detecting cgroup driver to use...
	I0725 11:12:41.232798    4843 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0725 11:12:41.241342    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:12:41.245770    4843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0725 11:12:41.251523    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0725 11:12:41.256366    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:12:41.261172    4843 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 11:12:41.321126    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 11:12:41.326511    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 11:12:41.332267    4843 ssh_runner.go:195] Run: which cri-dockerd
	I0725 11:12:41.333398    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0725 11:12:41.335871    4843 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0725 11:12:41.340594    4843 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0725 11:12:41.418379    4843 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0725 11:12:41.497638    4843 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0725 11:12:41.497701    4843 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0725 11:12:41.503121    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:41.570848    4843 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:12:42.733934    4843 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163102833s)
	I0725 11:12:42.734003    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0725 11:12:42.738805    4843 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0725 11:12:42.745017    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:12:42.749580    4843 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0725 11:12:42.833603    4843 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0725 11:12:42.910192    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:42.986321    4843 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0725 11:12:42.992525    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0725 11:12:42.997441    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:43.075957    4843 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0725 11:12:43.113887    4843 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0725 11:12:43.113971    4843 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
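
start.go then waits up to 60s for /var/run/cri-dockerd.sock to exist before probing crictl. A generic stat-poll sketch of that wait (the path and timeout come from the log; the function name is illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath stats a path until it exists or the deadline passes,
    // like the 60s wait for the cri-dockerd socket above.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }
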
	I0725 11:12:43.115986    4843 start.go:563] Will wait 60s for crictl version
	I0725 11:12:43.116036    4843 ssh_runner.go:195] Run: which crictl
	I0725 11:12:43.117484    4843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 11:12:43.131681    4843 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0725 11:12:43.131743    4843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:12:43.147165    4843 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0725 11:12:43.169864    4843 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0725 11:12:43.169992    4843 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0725 11:12:43.171188    4843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 11:12:43.174787    4843 kubeadm.go:883] updating cluster {Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0725 11:12:43.174832    4843 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0725 11:12:43.174879    4843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:12:43.184998    4843 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:12:43.185007    4843 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
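
Note the registry mismatch here: the v1.24.1 preload tarball tags its images under k8s.gcr.io, while this minikube build checks for registry.k8s.io names, so the presence check fails and the cached-image path is taken even though functionally equivalent images are already loaded. The check itself is plain set containment; a sketch (function and variable names are hypothetical):

    package main

    import "fmt"

    // preloaded reports whether every expected reference is already in
    // the runtime's image list -- the check behind "wasn't preloaded".
    func preloaded(have, want []string) (bool, string) {
        set := make(map[string]bool, len(have))
        for _, img := range have {
            set[img] = true
        }
        for _, img := range want {
            if !set[img] {
                return false, img // first missing reference
            }
        }
        return true, ""
    }

    func main() {
        have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // old registry name
        want := []string{"registry.k8s.io/kube-apiserver:v1.24.1"}
        ok, missing := preloaded(have, want)
        fmt.Println(ok, missing) // false registry.k8s.io/kube-apiserver:v1.24.1
    }
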
	I0725 11:12:43.185049    4843 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:12:43.188140    4843 ssh_runner.go:195] Run: which lz4
	I0725 11:12:43.189418    4843 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0725 11:12:43.190658    4843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0725 11:12:43.190678    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0725 11:12:44.078017    4843 docker.go:649] duration metric: took 888.655959ms to copy over tarball
	I0725 11:12:44.078074    4843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0725 11:12:45.236532    4843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.158478125s)
	I0725 11:12:45.236545    4843 ssh_runner.go:146] rm: /preloaded.tar.lz4
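
The preload path copies the roughly 360 MB lz4 tarball into the guest and unpacks it over /var, preserving security xattrs so file capabilities survive extraction. The same invocation from Go's os/exec (a sketch; it assumes tar and lz4 are present on the target, as they are in the buildroot image):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload runs the same tar invocation as the log:
    // lz4-compressed, xattr-preserving, unpacked under /var.
    func extractPreload(tarball string) error {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("tar failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }
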
	I0725 11:12:45.251835    4843 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0725 11:12:45.255095    4843 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0725 11:12:45.260519    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:45.342527    4843 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0725 11:12:46.807772    4843 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.465269625s)
	I0725 11:12:46.807865    4843 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0725 11:12:46.819533    4843 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0725 11:12:46.819543    4843 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0725 11:12:46.819548    4843 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0725 11:12:46.823647    4843 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:46.825754    4843 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:46.827702    4843 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:46.827711    4843 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:46.830017    4843 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:46.830198    4843 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:46.832565    4843 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:46.832617    4843 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:46.834474    4843 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:46.834595    4843 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:46.836000    4843 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:46.836039    4843 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:46.837498    4843 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:46.837524    4843 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0725 11:12:46.838366    4843 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:46.839744    4843 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0725 11:12:47.283601    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.286246    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.296015    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.299023    4843 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0725 11:12:47.299060    4843 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.299108    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0725 11:12:47.311874    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:47.322992    4843 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0725 11:12:47.323013    4843 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.323065    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0725 11:12:47.323207    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0725 11:12:47.323335    4843 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0725 11:12:47.323345    4843 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.323367    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0725 11:12:47.327444    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.331144    4843 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0725 11:12:47.331165    4843 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0725 11:12:47.331210    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0725 11:12:47.337134    4843 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0725 11:12:47.337287    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.343610    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0725 11:12:47.346622    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0725 11:12:47.346997    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0725 11:12:47.358934    4843 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0725 11:12:47.358957    4843 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.358959    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0725 11:12:47.359003    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0725 11:12:47.364266    4843 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0725 11:12:47.364290    4843 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.364343    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0725 11:12:47.378681    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0725 11:12:47.378677    4843 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0725 11:12:47.378741    4843 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0725 11:12:47.378781    4843 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0725 11:12:47.378787    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:12:47.385282    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0725 11:12:47.385400    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:12:47.389762    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0725 11:12:47.389778    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0725 11:12:47.389789    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0725 11:12:47.389835    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0725 11:12:47.389842    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0725 11:12:47.389857    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0725 11:12:47.400914    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0725 11:12:47.400944    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0725 11:12:47.438416    4843 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0725 11:12:47.438430    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0725 11:12:47.541537    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0725 11:12:47.541555    4843 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0725 11:12:47.541561    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W0725 11:12:47.625886    4843 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0725 11:12:47.626001    4843 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.656363    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0725 11:12:47.659181    4843 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0725 11:12:47.659204    4843 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.659259    4843 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:12:47.682808    4843 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0725 11:12:47.682934    4843 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:12:47.695433    4843 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0725 11:12:47.695459    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0725 11:12:47.742397    4843 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0725 11:12:47.742416    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0725 11:12:47.880765    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0725 11:12:47.880790    4843 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0725 11:12:47.880796    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0725 11:12:48.112279    4843 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0725 11:12:48.112318    4843 cache_images.go:92] duration metric: took 1.292801417s to LoadCachedImages
	W0725 11:12:48.112360    4843 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
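
This is the first real casualty in the run: the arm64 image cache contains tarballs for etcd, coredns, pause and the storage provisioner (all transferred and loaded above), but the kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy tarballs are absent, so LoadCachedImages fails with ENOENT and startup continues without the control-plane images; that gap is a plausible contributor to the TestStoppedBinaryUpgrade/Upgrade failure this report records. Checking the cache up front would surface the missing files before any transfer work; a sketch (paths and names illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // verifyCache stats every expected cache tarball, so a missing file
    // is reported before any images are copied into the guest.
    func verifyCache(dir string, names []string) error {
        for _, n := range names {
            if _, err := os.Stat(filepath.Join(dir, n)); err != nil {
                return fmt.Errorf("cache incomplete: %w", err)
            }
        }
        return nil
    }

    func main() {
        err := verifyCache(".minikube/cache/images/arm64/registry.k8s.io",
            []string{"kube-apiserver_v1.24.1", "etcd_3.5.3-0"})
        fmt.Println(err)
    }
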
	I0725 11:12:48.112365    4843 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0725 11:12:48.112421    4843 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-820000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 11:12:48.112481    4843 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0725 11:12:48.127716    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:12:48.127730    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:12:48.127734    4843 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 11:12:48.127743    4843 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-820000 NodeName:stopped-upgrade-820000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 11:12:48.127802    4843 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-820000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 11:12:48.127856    4843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0725 11:12:48.131058    4843 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 11:12:48.131084    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 11:12:48.134194    4843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0725 11:12:48.139379    4843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 11:12:48.144486    4843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
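
The kubeadm.yaml.new just written (2096 bytes) is the four-document YAML rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---". A quick structural sanity check on such a file, assuming the third-party gopkg.in/yaml.v3 module (a sketch, not minikube's validation):

    package main

    import (
        "fmt"
        "io"
        "strings"

        "gopkg.in/yaml.v3"
    )

    // docKinds splits a multi-document kubeadm.yaml and returns the kind
    // of each document, which should match the four kinds listed above.
    func docKinds(y string) ([]string, error) {
        dec := yaml.NewDecoder(strings.NewReader(y))
        var kinds []string
        for {
            var doc struct {
                Kind string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                return kinds, nil
            } else if err != nil {
                return nil, err
            }
            kinds = append(kinds, doc.Kind)
        }
    }

    func main() {
        kinds, err := docKinds("kind: InitConfiguration\n---\nkind: ClusterConfiguration\n")
        fmt.Println(kinds, err)
    }
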
	I0725 11:12:48.149457    4843 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0725 11:12:48.150694    4843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 11:12:48.154784    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:12:48.235994    4843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:12:48.245810    4843 certs.go:68] Setting up /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000 for IP: 10.0.2.15
	I0725 11:12:48.245820    4843 certs.go:194] generating shared ca certs ...
	I0725 11:12:48.245828    4843 certs.go:226] acquiring lock for ca certs: {Name:mk89636080cfada095e98fa6c0bd32580553affa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.246012    4843 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key
	I0725 11:12:48.246050    4843 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key
	I0725 11:12:48.246060    4843 certs.go:256] generating profile certs ...
	I0725 11:12:48.246131    4843 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key
	I0725 11:12:48.246149    4843 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42
	I0725 11:12:48.246159    4843 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0725 11:12:48.337978    4843 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 ...
	I0725 11:12:48.337991    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42: {Name:mkebcf6c4eabab22499b8d04e2fb92fba722ab86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.338302    4843 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42 ...
	I0725 11:12:48.338307    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42: {Name:mk0ade813cce628ed63ee06b37d15229e2dc78bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.338440    4843 certs.go:381] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt.9f093d42 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt
	I0725 11:12:48.338745    4843 certs.go:385] copying /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key.9f093d42 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key
	I0725 11:12:48.338901    4843 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.key
	I0725 11:12:48.339054    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem (1338 bytes)
	W0725 11:12:48.339081    4843 certs.go:480] ignoring /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694_empty.pem, impossibly tiny 0 bytes
	I0725 11:12:48.339086    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca-key.pem (1679 bytes)
	I0725 11:12:48.339106    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem (1078 bytes)
	I0725 11:12:48.339125    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem (1123 bytes)
	I0725 11:12:48.339147    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/key.pem (1675 bytes)
	I0725 11:12:48.339194    4843 certs.go:484] found cert: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem (1708 bytes)
	I0725 11:12:48.339553    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 11:12:48.346730    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 11:12:48.353803    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 11:12:48.361118    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0725 11:12:48.368106    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 11:12:48.374923    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 11:12:48.382314    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 11:12:48.389327    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 11:12:48.396085    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 11:12:48.402758    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/1694.pem --> /usr/share/ca-certificates/1694.pem (1338 bytes)
	I0725 11:12:48.410162    4843 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/ssl/certs/16942.pem --> /usr/share/ca-certificates/16942.pem (1708 bytes)
	I0725 11:12:48.417153    4843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 11:12:48.422302    4843 ssh_runner.go:195] Run: openssl version
	I0725 11:12:48.424333    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 11:12:48.427350    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.428882    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 17:29 /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.428903    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 11:12:48.430632    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 11:12:48.434005    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1694.pem && ln -fs /usr/share/ca-certificates/1694.pem /etc/ssl/certs/1694.pem"
	I0725 11:12:48.436860    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.438202    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 17:36 /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.438221    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1694.pem
	I0725 11:12:48.440296    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1694.pem /etc/ssl/certs/51391683.0"
	I0725 11:12:48.443298    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16942.pem && ln -fs /usr/share/ca-certificates/16942.pem /etc/ssl/certs/16942.pem"
	I0725 11:12:48.446552    4843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.447974    4843 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 17:36 /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.447996    4843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16942.pem
	I0725 11:12:48.449676    4843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16942.pem /etc/ssl/certs/3ec20f2e.0"
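
	The three test/ln pairs above implement OpenSSL's hashed-directory CA lookup: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (the value printed by `openssl x509 -hash`) plus a ".0" suffix. A minimal sketch of one such install step, assuming the cert path is passed as $1:

	    cert="$1"                                    # e.g. /usr/share/ca-certificates/minikubeCA.pem
	    h=$(openssl x509 -hash -noout -in "$cert")   # subject hash, e.g. b5213941
	    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"  # c_rehash-style link so OpenSSL can find the CA
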
	I0725 11:12:48.452497    4843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 11:12:48.453966    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 11:12:48.456094    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 11:12:48.457836    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 11:12:48.460321    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 11:12:48.461962    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 11:12:48.463632    4843 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
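
	Each `-checkend 86400` run asks openssl whether the certificate expires within the next 86400 seconds (24 hours); the command exits non-zero if so, which lets the caller decide whether a cert must be regenerated before the control plane restarts. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	        && echo "valid for at least 24h" \
	        || echo "expires within 24h"
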
	I0725 11:12:48.465509    4843 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-820000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50498 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-820000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0725 11:12:48.465579    4843 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:12:48.477919    4843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 11:12:48.481073    4843 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 11:12:48.481082    4843 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 11:12:48.481107    4843 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 11:12:48.483809    4843 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 11:12:48.484088    4843 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-820000" does not appear in /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:12:48.484187    4843 kubeconfig.go:62] /Users/jenkins/minikube-integration/19326-1196/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-820000" cluster setting kubeconfig missing "stopped-upgrade-820000" context setting]
	I0725 11:12:48.484404    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:12:48.484881    4843 kapi.go:59] client config for stopped-upgrade-820000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106493fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0725 11:12:48.485301    4843 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 11:12:48.487820    4843 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-820000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
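
	The drift check is a plain `diff -u` between the deployed kubeadm.yaml and the freshly rendered .new file; any non-zero exit marks the cluster for reconfiguration. Here the new config switches criSocket to the unix:// URI form that kubeadm v1.24 expects and cgroupDriver from systemd to cgroupfs. A sketch of the same gate:

	    old=/var/tmp/minikube/kubeadm.yaml
	    new=/var/tmp/minikube/kubeadm.yaml.new
	    if ! sudo diff -u "$old" "$new" >/dev/null; then
	        echo "kubeadm config drift detected; reconfiguring" >&2
	        sudo cp "$new" "$old"    # matches the cp later in this run
	    fi
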
	I0725 11:12:48.487825    4843 kubeadm.go:1160] stopping kube-system containers ...
	I0725 11:12:48.487861    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0725 11:12:48.498279    4843 docker.go:483] Stopping containers: [42523f7ee731 84ce05051b4f 255915f3e59c 10b2277d1125 7b567558ab7f 9c1204c98245 34a564d49a8e d27309cceaaf]
	I0725 11:12:48.498336    4843 ssh_runner.go:195] Run: docker stop 42523f7ee731 84ce05051b4f 255915f3e59c 10b2277d1125 7b567558ab7f 9c1204c98245 34a564d49a8e d27309cceaaf
	I0725 11:12:48.508833    4843 ssh_runner.go:195] Run: sudo systemctl stop kubelet
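
	Before reconfiguring, every kube-system container is stopped along with the kubelet, so nothing rewrites cluster state mid-restart. The list/stop/stop sequence above condenses to:

	    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
	    sudo systemctl stop kubelet
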
	I0725 11:12:48.514448    4843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:12:48.517839    4843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:12:48.517844    4843 kubeadm.go:157] found existing configuration files:
	
	I0725 11:12:48.517868    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0725 11:12:48.520366    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:12:48.520389    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:12:48.523006    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0725 11:12:48.526238    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:12:48.526261    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:12:48.529048    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0725 11:12:48.531579    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:12:48.531598    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:12:48.534620    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0725 11:12:48.537667    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:12:48.537691    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
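
	The four grep/rm pairs above follow one pattern: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint and deleted when the endpoint (or, as here, the whole file) is missing, so the next `kubeadm init phase kubeconfig` can regenerate it cleanly. Equivalently:

	    endpoint="https://control-plane.minikube.internal:50498"
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done
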
	I0725 11:12:48.540201    4843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:12:48.542952    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:48.564793    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.176445    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.298042    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0725 11:12:49.323775    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
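
	The restart path re-runs individual `kubeadm init` phases instead of a full init, in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd, all against the same /var/tmp/minikube/kubeadm.yaml. Condensed:

	    cfg=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        # $phase is deliberately unquoted so "certs all" splits into two arguments
	        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase $phase --config "$cfg"
	    done
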
	I0725 11:12:49.342474    4843 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:12:49.342554    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:49.843758    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:50.343548    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:12:50.347820    4843 api_server.go:72] duration metric: took 1.005377125s to wait for apiserver process to appear ...
	I0725 11:12:50.347830    4843 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:12:50.347844    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:12:55.348377    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:12:55.348406    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:00.348904    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:00.348987    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:05.349448    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:05.349466    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:10.349647    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:10.349665    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:15.349863    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:15.349910    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:20.350271    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:20.350298    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:25.350850    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:25.350901    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:30.351722    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:30.351769    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:35.352934    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:35.353019    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:40.354656    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:40.354679    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:45.354882    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:45.354964    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:50.356902    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:50.357021    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:50.369315    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:13:50.369381    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:50.379725    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:13:50.379796    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:50.390901    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:13:50.390983    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:50.403064    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:13:50.403153    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:50.414966    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:13:50.415033    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:50.426164    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:13:50.426230    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:50.436185    4843 logs.go:276] 0 containers: []
	W0725 11:13:50.436195    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:50.436244    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:50.446814    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:13:50.446830    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:13:50.446835    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:13:50.461544    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:13:50.461559    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:13:50.477287    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:13:50.477301    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:13:50.491903    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:13:50.491914    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:13:50.507674    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:13:50.507688    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:13:50.519121    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:50.519131    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:50.557094    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:50.557102    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:50.561048    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:13:50.561058    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:13:50.603448    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:13:50.603458    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:13:50.615718    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:13:50.615729    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:13:50.633043    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:13:50.633053    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:13:50.647974    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:13:50.647994    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:13:50.659695    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:13:50.659706    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:13:50.671688    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:50.671699    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:50.697283    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:13:50.697289    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:50.709018    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:50.709031    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:50.815033    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:13:50.815044    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
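
	From 11:12:50 onward the run is stuck in the health-check loop: each probe of https://10.0.2.15:8443/healthz times out after five seconds, and after repeated failures the tool captures `docker logs --tail 400` for every control-plane container plus the kubelet and docker journals, dmesg, and `kubectl describe nodes` before probing again. Ignoring the client-certificate auth the real probe carries, one iteration reduces roughly to:

	    if curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; then
	        echo "apiserver healthy"
	    else
	        docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}' |
	            xargs -rn1 docker logs --tail 400    # gather diagnostics, as in the cycles below
	    fi
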
	I0725 11:13:53.331050    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:13:58.333299    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:13:58.333534    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:13:58.353333    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:13:58.353419    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:13:58.366752    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:13:58.366821    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:13:58.377967    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:13:58.378045    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:13:58.393746    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:13:58.393810    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:13:58.404327    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:13:58.404395    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:13:58.419123    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:13:58.419196    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:13:58.429037    4843 logs.go:276] 0 containers: []
	W0725 11:13:58.429049    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:13:58.429105    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:13:58.439370    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:13:58.439385    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:13:58.439391    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:13:58.452964    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:13:58.452976    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:13:58.464767    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:13:58.464778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:13:58.477665    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:13:58.477679    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:13:58.498470    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:13:58.498483    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:13:58.510767    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:13:58.510778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:13:58.529503    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:13:58.529516    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:13:58.541302    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:13:58.541314    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:13:58.557162    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:13:58.557175    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:13:58.574182    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:13:58.574192    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:13:58.599457    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:13:58.599466    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:13:58.637950    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:13:58.637965    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:13:58.649516    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:13:58.649527    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:13:58.687582    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:13:58.687593    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:13:58.691879    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:13:58.691886    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:13:58.728806    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:13:58.728822    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:13:58.743882    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:13:58.743896    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:01.260615    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:06.261175    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:06.261389    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:06.280909    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:06.280989    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:06.295095    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:06.295167    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:06.307275    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:06.307340    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:06.317810    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:06.317880    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:06.328155    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:06.328228    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:06.339293    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:06.339362    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:06.354394    4843 logs.go:276] 0 containers: []
	W0725 11:14:06.354408    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:06.354468    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:06.365453    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:06.365470    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:06.365475    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:06.404590    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:06.404600    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:06.422988    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:06.422998    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:06.447800    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:06.447809    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:06.461900    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:06.461910    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:06.475591    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:06.475601    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:06.487377    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:06.487387    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:06.501321    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:06.501332    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:06.517232    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:06.517242    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:06.532145    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:06.532154    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:06.546903    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:06.546914    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:06.558865    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:06.558875    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:06.579631    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:06.579644    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:06.591759    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:06.591769    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:06.596379    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:06.596386    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:06.634199    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:06.634210    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:06.672173    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:06.672184    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:09.185459    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:14.187639    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:14.187745    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:14.199682    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:14.199758    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:14.210773    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:14.210841    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:14.221005    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:14.221064    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:14.231272    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:14.231346    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:14.241464    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:14.241530    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:14.251964    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:14.252033    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:14.261858    4843 logs.go:276] 0 containers: []
	W0725 11:14:14.261868    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:14.261927    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:14.272529    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:14.272545    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:14.272550    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:14.311441    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:14.311454    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:14.315931    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:14.315939    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:14.352933    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:14.352947    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:14.364976    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:14.364987    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:14.383141    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:14.383154    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:14.396949    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:14.396962    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:14.409286    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:14.409297    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:14.426298    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:14.426308    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:14.437529    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:14.437540    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:14.448886    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:14.448897    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:14.486701    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:14.486711    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:14.501320    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:14.501329    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:14.516255    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:14.516269    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:14.527897    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:14.527912    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:14.554845    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:14.554870    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:14.571592    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:14.571605    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:17.088848    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:22.091480    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:22.091836    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:22.129143    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:22.129290    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:22.150266    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:22.150364    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:22.165523    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:22.165602    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:22.181830    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:22.181897    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:22.192573    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:22.192640    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:22.209384    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:22.209454    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:22.219940    4843 logs.go:276] 0 containers: []
	W0725 11:14:22.219952    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:22.220012    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:22.230786    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:22.230803    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:22.230808    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:22.242148    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:22.242159    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:22.280060    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:22.280068    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:22.294384    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:22.294394    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:22.306568    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:22.306580    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:22.321357    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:22.321368    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:22.332958    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:22.332973    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:22.345346    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:22.345357    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:22.384969    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:22.384980    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:22.398625    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:22.398635    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:22.410382    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:22.410398    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:22.427471    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:22.427482    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:22.451679    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:22.451690    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:22.455826    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:22.455834    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:22.497745    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:22.497757    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:22.512390    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:22.512403    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:22.524145    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:22.524159    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:25.041845    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:30.044255    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:30.044669    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:30.084964    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:30.085102    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:30.106455    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:30.106548    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:30.121330    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:30.121395    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:30.133991    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:30.134066    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:30.144948    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:30.145010    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:30.156022    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:30.156088    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:30.166273    4843 logs.go:276] 0 containers: []
	W0725 11:14:30.166287    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:30.166347    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:30.176462    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:30.176482    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:30.176487    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:30.191523    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:30.191532    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:30.207652    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:30.207664    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:30.211838    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:30.211847    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:30.226743    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:30.226755    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:30.239877    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:30.239889    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:30.252000    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:30.252012    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:30.270119    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:30.270129    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:30.305208    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:30.305219    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:30.317153    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:30.317165    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:30.332566    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:30.332577    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:30.357996    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:30.358008    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:30.375487    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:30.375499    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:30.389125    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:30.389141    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:30.426244    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:30.426256    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:30.468220    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:30.468233    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:30.481304    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:30.481315    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:32.995125    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:37.997790    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:37.997999    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:38.023246    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:38.023344    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:38.038972    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:38.039050    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:38.051579    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:38.051651    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:38.063054    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:38.063125    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:38.073563    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:38.073621    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:38.084177    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:38.084237    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:38.094039    4843 logs.go:276] 0 containers: []
	W0725 11:14:38.094051    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:38.094109    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:38.104646    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:38.104664    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:38.104670    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:38.118328    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:38.118342    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:38.133963    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:38.133973    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:38.145896    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:38.145906    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:38.183124    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:38.183134    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:38.200025    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:38.200036    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:38.214160    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:38.214169    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:38.238023    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:38.238034    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:38.241943    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:38.241950    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:38.278843    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:38.278853    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:38.295324    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:38.295335    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:38.310245    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:38.310259    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:38.332236    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:38.332247    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:38.345572    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:38.345586    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:38.390815    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:38.390829    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:38.402722    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:38.402735    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:38.414839    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:38.414850    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:40.931708    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:45.933368    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:45.933534    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:45.949593    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:45.949673    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:45.962512    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:45.962587    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:45.973530    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:45.973596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:45.984430    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:45.984501    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:45.994829    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:45.994887    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:46.005405    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:46.005467    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:46.016162    4843 logs.go:276] 0 containers: []
	W0725 11:14:46.016173    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:46.016228    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:46.030119    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:46.030139    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:46.030145    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:46.044282    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:46.044292    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:46.058361    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:46.058373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:46.073429    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:46.073439    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:46.090105    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:46.090116    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:46.104118    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:46.104128    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:46.116222    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:46.116234    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:46.155335    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:46.155349    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:46.171363    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:46.171373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:46.183135    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:46.183145    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:46.195869    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:46.195880    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:46.210591    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:46.210602    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:46.246620    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:46.246630    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:46.250667    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:46.250677    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:46.262076    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:46.262092    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:46.286321    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:46.286329    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:46.297963    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:46.297975    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
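
	Between probes, each component's containers are found with docker ps -a --filter=name=k8s_<name> --format={{.ID}} and then dumped with docker logs --tail 400 <id>, as the lines above show. Below is a sketch of that discovery-and-gather step; in the report these commands run on the guest over SSH (ssh_runner.go), whereas this illustration runs them against a local Docker daemon, and the component list is only a sample:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs lists all container IDs (running or exited) whose name
	    // matches k8s_<component>, mirroring the filter used in the log.
	    func containerIDs(component string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
	            ids, err := containerIDs(c)
	            if err != nil {
	                fmt.Println("listing failed:", err)
	                continue
	            }
	            fmt.Printf("%d containers: %v\n", len(ids), ids)
	            for _, id := range ids {
	                // --tail 400 matches the gathering commands in the report
	                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
	            }
	        }
	    }
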
	I0725 11:14:48.838332    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:14:53.837772    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:14:53.837924    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:14:53.848948    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:14:53.849032    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:14:53.860007    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:14:53.860078    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:14:53.870445    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:14:53.870508    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:14:53.882961    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:14:53.883036    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:14:53.893283    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:14:53.893352    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:14:53.903532    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:14:53.903596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:14:53.913594    4843 logs.go:276] 0 containers: []
	W0725 11:14:53.913606    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:14:53.913667    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:14:53.924047    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:14:53.924066    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:14:53.924071    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:14:53.939540    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:14:53.939551    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:14:53.976493    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:14:53.976503    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:14:53.990587    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:14:53.990601    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:14:54.002206    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:14:54.002219    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:14:54.006420    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:14:54.006428    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:14:54.018563    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:14:54.018574    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:14:54.030924    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:14:54.030935    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:14:54.054916    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:14:54.054923    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:14:54.091320    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:14:54.091328    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:14:54.105707    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:14:54.105716    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:14:54.118571    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:14:54.118582    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:14:54.136227    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:14:54.136241    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:14:54.155924    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:14:54.155937    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:14:54.167356    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:14:54.167371    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:14:54.179468    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:14:54.179480    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:14:54.217965    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:14:54.217978    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:14:56.732933    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:01.733379    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:01.733539    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:01.748274    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:01.748355    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:01.765937    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:01.766006    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:01.776463    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:01.776535    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:01.787108    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:01.787178    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:01.797280    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:01.797348    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:01.808230    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:01.808296    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:01.818386    4843 logs.go:276] 0 containers: []
	W0725 11:15:01.818397    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:01.818446    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:01.833774    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:01.833793    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:01.833799    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:01.848235    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:01.848246    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:01.859723    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:01.859735    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:01.874743    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:01.874753    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:01.912469    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:01.912479    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:01.916698    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:01.916707    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:01.954087    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:01.954099    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:01.969744    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:01.969755    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:01.983356    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:01.983368    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:02.005663    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:02.005675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:02.023433    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:02.023444    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:02.034748    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:02.034761    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:02.045787    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:02.045800    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:02.058016    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:02.058027    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:02.093408    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:02.093418    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:02.109868    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:02.109882    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:02.125034    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:02.125047    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:04.651242    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:09.652264    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:09.652490    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:09.674413    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:09.674500    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:09.686860    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:09.686934    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:09.698181    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:09.698249    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:09.712438    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:09.712516    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:09.722526    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:09.722600    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:09.733080    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:09.733150    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:09.743987    4843 logs.go:276] 0 containers: []
	W0725 11:15:09.743999    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:09.744055    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:09.754424    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:09.754451    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:09.754456    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:09.766669    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:09.766702    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:09.802725    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:09.802737    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:09.818362    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:09.818374    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:09.832811    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:09.832823    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:09.848649    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:09.848660    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:09.865561    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:09.865571    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:09.877253    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:09.877263    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:09.889372    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:09.889382    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:09.901380    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:09.901390    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:09.916093    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:09.916102    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:09.927427    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:09.927440    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:09.931632    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:09.931641    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:09.956078    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:09.956087    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:09.993640    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:09.993650    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:10.007831    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:10.007841    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:10.044740    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:10.044753    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:12.558129    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:17.558584    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:17.558836    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:17.587378    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:17.587504    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:17.606000    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:17.606100    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:17.629425    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:17.629503    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:17.640398    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:17.640465    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:17.650538    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:17.650610    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:17.661433    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:17.661498    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:17.671546    4843 logs.go:276] 0 containers: []
	W0725 11:15:17.671556    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:17.671609    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:17.682135    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:17.682152    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:17.682156    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:17.696849    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:17.696858    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:17.708474    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:17.708486    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:17.720597    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:17.720609    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:17.757494    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:17.757508    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:17.770670    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:17.770682    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:17.782568    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:17.782579    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:17.820494    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:17.820501    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:17.857067    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:17.857077    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:17.868481    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:17.868492    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:17.881540    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:17.881551    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:17.902074    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:17.902090    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:17.924488    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:17.924499    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:17.942402    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:17.942413    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:17.956074    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:17.956085    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:17.971512    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:17.971526    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:17.976043    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:17.976052    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:20.491105    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:25.493005    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:25.493363    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:25.539204    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:25.539347    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:25.558830    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:25.558925    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:25.573387    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:25.573458    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:25.587818    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:25.587895    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:25.598510    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:25.598572    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:25.609644    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:25.609712    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:25.621518    4843 logs.go:276] 0 containers: []
	W0725 11:15:25.621534    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:25.621589    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:25.632059    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:25.632075    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:25.632081    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:25.646706    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:25.646718    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:25.665810    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:25.665824    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:25.677589    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:25.677603    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:25.693026    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:25.693037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:25.735472    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:25.735483    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:25.747784    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:25.747798    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:25.759126    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:25.759140    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:25.797982    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:25.797995    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:25.814710    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:25.814722    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:25.828449    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:25.828460    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:25.846396    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:25.846409    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:25.858027    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:25.858037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:25.876453    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:25.876463    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:25.899371    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:25.899381    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:25.911114    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:25.911124    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:25.915894    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:25.915901    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:28.452431    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:33.454876    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:33.455060    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:33.479550    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:33.479655    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:33.495136    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:33.495216    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:33.507910    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:33.507979    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:33.523368    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:33.523431    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:33.534015    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:33.534088    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:33.545235    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:33.545307    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:33.554993    4843 logs.go:276] 0 containers: []
	W0725 11:15:33.555005    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:33.555061    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:33.565187    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:33.565208    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:33.565213    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:33.581878    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:33.581890    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:33.592802    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:33.592815    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:33.610367    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:33.610377    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:33.625543    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:33.625553    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:33.637007    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:33.637019    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:33.661002    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:33.661012    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:33.675318    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:33.675329    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:33.713677    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:33.713692    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:33.728139    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:33.728150    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:33.740155    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:33.740169    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:33.744281    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:33.744288    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:33.758261    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:33.758270    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:33.769640    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:33.769654    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:33.808487    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:33.808495    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:33.842884    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:33.842897    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:33.854789    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:33.854798    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:36.367684    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:41.369805    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:41.369965    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:41.384684    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:41.384766    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:41.400390    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:41.400448    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:41.410793    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:41.410853    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:41.421021    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:41.421086    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:41.431048    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:41.431105    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:41.442113    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:41.442170    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:41.452699    4843 logs.go:276] 0 containers: []
	W0725 11:15:41.452709    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:41.452755    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:41.464985    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:41.465000    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:41.465006    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:41.476676    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:41.476690    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:41.487415    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:41.487426    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:41.499767    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:41.499777    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:41.536370    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:41.536381    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:41.547667    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:41.547679    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:41.559850    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:41.559860    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:41.577985    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:41.577995    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:41.595028    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:41.595037    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:41.609481    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:41.609491    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:41.647766    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:41.647777    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:41.662035    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:41.662045    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:41.673349    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:41.673360    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:41.677488    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:41.677494    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:41.713272    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:41.713283    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:41.736362    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:41.736373    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:41.753975    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:41.753988    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:44.272581    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:49.273710    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:49.273960    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:49.303639    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:49.303748    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:49.322579    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:49.322669    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:49.338525    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:49.338603    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:49.350776    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:49.350854    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:49.362129    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:49.362198    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:49.373505    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:49.373566    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:49.384079    4843 logs.go:276] 0 containers: []
	W0725 11:15:49.384090    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:49.384147    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:49.394622    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:49.394644    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:49.394649    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:49.406223    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:49.406234    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:49.452952    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:49.452964    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:49.468466    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:49.468481    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:49.483795    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:49.483805    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:49.495229    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:49.495240    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:49.519543    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:49.519558    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:49.524332    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:49.524340    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:49.538997    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:49.539008    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:49.551270    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:49.551284    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:15:49.563076    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:49.563087    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:49.578408    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:49.578419    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:49.596911    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:49.596920    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:49.610800    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:49.610810    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:49.622853    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:49.622864    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:49.640917    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:49.640927    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:49.678482    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:49.678489    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:52.215401    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:15:57.217464    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:15:57.217661    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:15:57.236232    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:15:57.236323    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:15:57.252091    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:15:57.252166    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:15:57.264366    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:15:57.264436    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:15:57.274967    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:15:57.275035    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:15:57.286024    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:15:57.286086    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:15:57.296942    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:15:57.297007    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:15:57.306783    4843 logs.go:276] 0 containers: []
	W0725 11:15:57.306795    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:15:57.306848    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:15:57.317727    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:15:57.317747    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:15:57.317754    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:15:57.332650    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:15:57.332662    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:15:57.344756    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:15:57.344766    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:15:57.356041    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:15:57.356051    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:15:57.390801    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:15:57.390812    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:15:57.427387    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:15:57.427398    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:15:57.441210    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:15:57.441222    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:15:57.452702    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:15:57.452712    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:15:57.464296    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:15:57.464306    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:15:57.468649    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:15:57.468655    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:15:57.487435    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:15:57.487444    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:15:57.506785    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:15:57.506795    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:15:57.523502    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:15:57.523512    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:15:57.561831    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:15:57.561841    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:15:57.576747    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:15:57.576757    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:15:57.587755    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:15:57.587764    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:15:57.610132    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:15:57.610140    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
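
	The "container status" step above relies on a shell fallback: `which crictl || echo crictl` substitutes the crictl path when it is installed (and the bare name otherwise, which then fails to run), so the final || sudo docker ps -a branch executes whenever crictl is unavailable. A small sketch of invoking that same one-liner the way the log does, via /bin/bash -c; sudo is dropped here because this illustration assumes a locally accessible docker socket:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // prefer crictl when present, otherwise fall back to docker
	        cmd := exec.Command("/bin/bash", "-c",
	            "`which crictl || echo crictl` ps -a || docker ps -a")
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	        }
	        fmt.Print(string(out))
	    }
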
	I0725 11:16:00.123962    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:05.126151    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:05.126270    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:05.141827    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:05.141905    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:05.154266    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:05.154341    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:05.165100    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:05.165170    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:05.180638    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:05.180710    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:05.191465    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:05.191541    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:05.202197    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:05.202267    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:05.212556    4843 logs.go:276] 0 containers: []
	W0725 11:16:05.212566    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:05.212627    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:05.223020    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:05.223039    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:05.223044    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:05.234555    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:05.234570    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:05.246089    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:05.246099    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:05.284640    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:05.284661    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:05.289556    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:05.289563    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:05.303816    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:05.303829    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:05.315922    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:05.315940    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:05.331852    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:05.331861    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:05.351404    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:05.351414    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:05.366770    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:05.366780    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:05.390394    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:05.390402    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:05.424263    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:05.424272    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:05.437852    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:05.437866    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:05.453521    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:05.453532    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:05.465865    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:05.465877    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:05.503612    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:05.503624    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:05.515499    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:05.515514    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
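
Editor's note: between probes, the same set of sources is collected on every cycle: per-container `docker logs --tail 400`, `journalctl` for the kubelet and Docker/cri-docker units, a severity-filtered `dmesg`, and `kubectl describe nodes`. A sketch of that gather table; the command strings are copied verbatim from the log, while the loop around them is only an illustration of the pattern.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherSources maps a label to the shell command the log runs for it.
    var gatherSources = map[string]string{
        "kubelet": "sudo journalctl -u kubelet -n 400",
        "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
        "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    }

    // containerLogs builds the per-container tail command used above.
    func containerLogs(id string) string {
        return fmt.Sprintf("docker logs --tail 400 %s", id)
    }

    func main() {
        for name, cmd := range gatherSources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            _ = out // a real gatherer would buffer these for the report
        }
        fmt.Println("Run:", containerLogs("476a57322522")) // container ID from the log above
    }
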
	I0725 11:16:08.029561    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:13.031781    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:13.031920    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:13.043764    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:13.043837    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:13.054208    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:13.054280    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:13.064904    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:13.064974    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:13.076298    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:13.076368    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:13.090702    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:13.090776    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:13.101754    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:13.101822    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:13.112532    4843 logs.go:276] 0 containers: []
	W0725 11:16:13.112542    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:13.112596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:13.123560    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:13.123577    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:13.123583    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:13.147108    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:13.147119    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:13.159309    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:13.159319    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:13.171398    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:13.171407    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:13.185030    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:13.185041    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:13.200351    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:13.200361    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:13.216218    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:13.216231    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:13.230221    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:13.230234    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:13.250275    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:13.250285    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:13.272035    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:13.272048    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:13.284691    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:13.284702    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:13.295603    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:13.295612    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:13.307504    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:13.307517    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:13.345586    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:13.345595    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:13.350280    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:13.350290    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:13.385278    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:13.385291    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:13.423138    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:13.423149    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:15.941557    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:20.944040    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:20.944192    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:20.959959    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:20.960031    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:20.972198    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:20.972276    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:20.983250    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:20.983325    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:20.994373    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:20.994440    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:21.011683    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:21.011752    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:21.024790    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:21.024866    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:21.035525    4843 logs.go:276] 0 containers: []
	W0725 11:16:21.035539    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:21.035596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:21.045908    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:21.045929    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:21.045935    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:21.060076    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:21.060089    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:21.071300    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:21.071310    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:21.082911    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:21.082924    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:21.098289    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:21.098300    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:21.109602    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:21.109616    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:21.148570    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:21.148582    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:21.163393    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:21.163402    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:21.174537    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:21.174547    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:21.213234    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:21.213250    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:21.217709    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:21.217716    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:21.229658    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:21.229681    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:21.252992    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:21.253000    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:21.288600    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:21.288609    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:21.303187    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:21.303199    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:21.318530    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:21.318541    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:21.332616    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:21.332627    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:23.851968    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:28.854337    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:28.854697    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:28.885634    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:28.885755    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:28.904107    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:28.904190    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:28.917371    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:28.917443    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:28.930429    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:28.930497    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:28.940805    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:28.940867    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:28.951338    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:28.951407    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:28.961232    4843 logs.go:276] 0 containers: []
	W0725 11:16:28.961242    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:28.961292    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:28.971849    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:28.971867    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:28.971872    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:28.985545    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:28.985556    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:29.010055    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:29.010064    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:29.014397    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:29.014403    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:29.031309    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:29.031319    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:29.045575    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:29.045585    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:29.057277    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:29.057289    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:29.094656    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:29.094675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:29.132320    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:29.132334    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:29.149527    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:29.149540    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:29.161306    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:29.161331    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:29.172416    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:29.172428    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:29.210767    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:29.210778    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:29.227465    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:29.227476    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:29.243339    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:29.243349    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:29.254597    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:29.254613    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:29.266240    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:29.266250    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:31.779777    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:36.782227    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:36.782467    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:36.802661    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:36.802766    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:36.816396    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:36.816475    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:36.832724    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:36.832794    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:36.843101    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:36.843177    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:36.855303    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:36.855368    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:36.868810    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:36.868884    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:36.879756    4843 logs.go:276] 0 containers: []
	W0725 11:16:36.879768    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:36.879817    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:36.890635    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:36.890652    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:36.890658    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:36.925641    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:36.925653    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:36.937231    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:36.937242    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:36.953035    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:36.953048    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:36.964937    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:36.964948    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:36.978379    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:36.978390    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:36.994181    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:36.994192    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:37.033324    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:37.033336    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:37.070990    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:37.071004    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:37.085685    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:37.085694    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:37.103339    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:37.103354    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:37.124682    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:37.124693    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:37.151055    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:37.151077    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:37.156015    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:37.156030    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:37.178549    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:37.178564    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:37.195331    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:37.195346    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:37.206841    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:37.206852    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:39.721771    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:44.723917    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:44.724074    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:16:44.740370    4843 logs.go:276] 2 containers: [3d0abd29bf43 255915f3e59c]
	I0725 11:16:44.740437    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:16:44.751384    4843 logs.go:276] 2 containers: [f0432e1e6aca 84ce05051b4f]
	I0725 11:16:44.751452    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:16:44.761699    4843 logs.go:276] 1 containers: [9f92afa246bd]
	I0725 11:16:44.761765    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:16:44.775183    4843 logs.go:276] 2 containers: [9862216e7265 10b2277d1125]
	I0725 11:16:44.775257    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:16:44.785788    4843 logs.go:276] 1 containers: [7cabcd4816f9]
	I0725 11:16:44.785859    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:16:44.796703    4843 logs.go:276] 2 containers: [ca785038db99 42523f7ee731]
	I0725 11:16:44.796773    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:16:44.807612    4843 logs.go:276] 0 containers: []
	W0725 11:16:44.807626    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:16:44.807688    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:16:44.823261    4843 logs.go:276] 2 containers: [e9945804ef2b 476a57322522]
	I0725 11:16:44.823278    4843 logs.go:123] Gathering logs for kube-proxy [7cabcd4816f9] ...
	I0725 11:16:44.823283    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cabcd4816f9"
	I0725 11:16:44.834650    4843 logs.go:123] Gathering logs for storage-provisioner [e9945804ef2b] ...
	I0725 11:16:44.834664    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9945804ef2b"
	I0725 11:16:44.846194    4843 logs.go:123] Gathering logs for kube-apiserver [255915f3e59c] ...
	I0725 11:16:44.846206    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 255915f3e59c"
	I0725 11:16:44.884618    4843 logs.go:123] Gathering logs for etcd [84ce05051b4f] ...
	I0725 11:16:44.884629    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84ce05051b4f"
	I0725 11:16:44.931753    4843 logs.go:123] Gathering logs for kube-controller-manager [42523f7ee731] ...
	I0725 11:16:44.931765    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42523f7ee731"
	I0725 11:16:44.948457    4843 logs.go:123] Gathering logs for storage-provisioner [476a57322522] ...
	I0725 11:16:44.948470    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 476a57322522"
	I0725 11:16:44.963753    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:16:44.963763    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:16:44.968292    4843 logs.go:123] Gathering logs for kube-apiserver [3d0abd29bf43] ...
	I0725 11:16:44.968299    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d0abd29bf43"
	I0725 11:16:44.981854    4843 logs.go:123] Gathering logs for etcd [f0432e1e6aca] ...
	I0725 11:16:44.981864    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f0432e1e6aca"
	I0725 11:16:44.998152    4843 logs.go:123] Gathering logs for coredns [9f92afa246bd] ...
	I0725 11:16:44.998162    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f92afa246bd"
	I0725 11:16:45.010068    4843 logs.go:123] Gathering logs for kube-scheduler [9862216e7265] ...
	I0725 11:16:45.010080    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9862216e7265"
	I0725 11:16:45.021932    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:16:45.021943    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:16:45.060142    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:16:45.060150    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:16:45.094917    4843 logs.go:123] Gathering logs for kube-scheduler [10b2277d1125] ...
	I0725 11:16:45.094927    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10b2277d1125"
	I0725 11:16:45.110436    4843 logs.go:123] Gathering logs for kube-controller-manager [ca785038db99] ...
	I0725 11:16:45.110447    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca785038db99"
	I0725 11:16:45.129284    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:16:45.129294    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:16:45.152670    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:16:45.152679    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:16:47.667542    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:16:52.669276    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:16:52.669337    4843 kubeadm.go:597] duration metric: took 4m4.208924833s to restartPrimaryControlPlane
	W0725 11:16:52.669395    4843 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0725 11:16:52.669420    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0725 11:16:53.696263    4843 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026864959s)
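
Editor's note: after 4m4s of failed healthz probes, minikube gives up on restarting the existing control plane and resets it. The reset shells out to the versioned kubeadm bundled under /var/lib/minikube/binaries and points it at the cri-dockerd socket. A sketch of that invocation, with paths and flags from the log and a simplified runner in place of ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // resetControlPlane runs the bundled kubeadm's reset, forced and pointed
    // at cri-dockerd, exactly as the command line in the log does.
    func resetControlPlane(version string) (time.Duration, error) {
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`,
            version)
        start := time.Now()
        err := exec.Command("/bin/bash", "-c", cmd).Run()
        return time.Since(start), err // the log reports ~1.03s for this step
    }

    func main() {
        d, err := resetControlPlane("v1.24.1")
        fmt.Printf("kubeadm reset took %s, err=%v\n", d, err)
    }
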
	I0725 11:16:53.696334    4843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 11:16:53.701267    4843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 11:16:53.704215    4843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 11:16:53.706756    4843 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 11:16:53.706763    4843 kubeadm.go:157] found existing configuration files:
	
	I0725 11:16:53.706786    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf
	I0725 11:16:53.709109    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 11:16:53.709129    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 11:16:53.711796    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf
	I0725 11:16:53.714187    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 11:16:53.714207    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 11:16:53.717093    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf
	I0725 11:16:53.720122    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 11:16:53.720140    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 11:16:53.722688    4843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf
	I0725 11:16:53.725991    4843 kubeadm.go:163] "https://control-plane.minikube.internal:50498" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50498 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 11:16:53.726012    4843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
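
Editor's note: the sequence above is the stale-kubeconfig sweep: for each of the four /etc/kubernetes/*.conf files, grep for the expected control-plane endpoint and remove the file when the endpoint is absent. Here every grep exits with status 2 because the files no longer exist after the reset, so each `rm -f` is a no-op. A compact sketch of that check, with the endpoint and file list taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50498"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the endpoint is missing *or* the file
            // does not exist; either way the possibly-stale file is removed.
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                _ = exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }
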
	I0725 11:16:53.729060    4843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0725 11:16:53.746444    4843 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0725 11:16:53.746473    4843 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 11:16:53.798985    4843 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 11:16:53.799046    4843 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 11:16:53.799095    4843 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0725 11:16:53.847452    4843 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 11:16:53.852617    4843 out.go:204]   - Generating certificates and keys ...
	I0725 11:16:53.852649    4843 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 11:16:53.852678    4843 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 11:16:53.852728    4843 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0725 11:16:53.852761    4843 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0725 11:16:53.852797    4843 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0725 11:16:53.852823    4843 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0725 11:16:53.852850    4843 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0725 11:16:53.852878    4843 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0725 11:16:53.852914    4843 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0725 11:16:53.852949    4843 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0725 11:16:53.852966    4843 kubeadm.go:310] [certs] Using the existing "sa" key
	I0725 11:16:53.852994    4843 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 11:16:53.950168    4843 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 11:16:54.094803    4843 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 11:16:54.187130    4843 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 11:16:54.238628    4843 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 11:16:54.269654    4843 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 11:16:54.270024    4843 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 11:16:54.270056    4843 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 11:16:54.351783    4843 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 11:16:54.356044    4843 out.go:204]   - Booting up control plane ...
	I0725 11:16:54.356093    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 11:16:54.356131    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 11:16:54.356170    4843 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 11:16:54.356213    4843 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 11:16:54.356320    4843 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0725 11:16:58.857439    4843 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501506 seconds
	I0725 11:16:58.857540    4843 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 11:16:58.863323    4843 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 11:16:59.372241    4843 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 11:16:59.372354    4843 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-820000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 11:16:59.875850    4843 kubeadm.go:310] [bootstrap-token] Using token: m6opb0.4rgq96igybzj768v
	I0725 11:16:59.881421    4843 out.go:204]   - Configuring RBAC rules ...
	I0725 11:16:59.881487    4843 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 11:16:59.881538    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 11:16:59.884983    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 11:16:59.885951    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 11:16:59.886854    4843 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 11:16:59.887881    4843 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 11:16:59.891026    4843 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 11:17:00.056608    4843 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 11:17:00.283759    4843 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 11:17:00.283771    4843 kubeadm.go:310] 
	I0725 11:17:00.283799    4843 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 11:17:00.283801    4843 kubeadm.go:310] 
	I0725 11:17:00.283885    4843 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 11:17:00.283890    4843 kubeadm.go:310] 
	I0725 11:17:00.283902    4843 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 11:17:00.283929    4843 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 11:17:00.283963    4843 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 11:17:00.283968    4843 kubeadm.go:310] 
	I0725 11:17:00.284008    4843 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 11:17:00.284018    4843 kubeadm.go:310] 
	I0725 11:17:00.284052    4843 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 11:17:00.284060    4843 kubeadm.go:310] 
	I0725 11:17:00.284098    4843 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 11:17:00.284160    4843 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 11:17:00.284200    4843 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 11:17:00.284203    4843 kubeadm.go:310] 
	I0725 11:17:00.284311    4843 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 11:17:00.284461    4843 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 11:17:00.284468    4843 kubeadm.go:310] 
	I0725 11:17:00.284593    4843 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m6opb0.4rgq96igybzj768v \
	I0725 11:17:00.284651    4843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 \
	I0725 11:17:00.284662    4843 kubeadm.go:310] 	--control-plane 
	I0725 11:17:00.284665    4843 kubeadm.go:310] 
	I0725 11:17:00.284703    4843 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 11:17:00.284706    4843 kubeadm.go:310] 
	I0725 11:17:00.284747    4843 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m6opb0.4rgq96igybzj768v \
	I0725 11:17:00.284799    4843 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:366b8493affb212c72b1809ab4c29298aab67cf1036f6f3313061b6e6baa4fa5 
	I0725 11:17:00.285019    4843 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 11:17:00.285244    4843 cni.go:84] Creating CNI manager for ""
	I0725 11:17:00.285268    4843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:17:00.288319    4843 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0725 11:17:00.295389    4843 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0725 11:17:00.298306    4843 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
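
Editor's note: with the cluster re-initialized, minikube selects the bridge CNI because the qemu2 driver is paired with the docker runtime, and copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The log does not show the payload; a minimal bridge conflist of the general shape involved might look like the sketch below (embedded as a Go constant; the field values, including the 10.244.0.0/16 subnet, are assumptions, not the actual bytes written).

    package main

    import "fmt"

    // bridgeConflist is a minimal CNI config of the shape minikube writes for
    // the bridge plugin; the exact 496-byte payload is not shown in the log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        fmt.Println(bridgeConflist) // would be written to /etc/cni/net.d/1-k8s.conflist
    }
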
	I0725 11:17:00.302914    4843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 11:17:00.302959    4843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 11:17:00.302960    4843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-820000 minikube.k8s.io/updated_at=2024_07_25T11_17_00_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=stopped-upgrade-820000 minikube.k8s.io/primary=true
	I0725 11:17:00.306109    4843 ops.go:34] apiserver oom_adj: -16
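
Editor's note: the oom_adj read above confirms the apiserver is strongly shielded from the kernel OOM killer (-16 on the legacy scale, where lower values are less likely to be killed). A sketch of reading that value for a process by name; `pgrep -n` (newest matching process) is assumed to be available on the guest.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // oomAdj reads the legacy /proc/<pid>/oom_adj score for the newest
    // process matching name; negative values resist the OOM killer.
    func oomAdj(name string) (string, error) {
        pid, err := exec.Command("pgrep", "-n", name).Output()
        if err != nil {
            return "", err
        }
        raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(raw)), nil
    }

    func main() {
        adj, err := oomAdj("kube-apiserver")
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
    }
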
	I0725 11:17:00.353044    4843 kubeadm.go:1113] duration metric: took 50.123042ms to wait for elevateKubeSystemPrivileges
	I0725 11:17:00.353057    4843 kubeadm.go:394] duration metric: took 4m11.908477125s to StartCluster
	I0725 11:17:00.353067    4843 settings.go:142] acquiring lock: {Name:mk9c0f6a74d3ffd78a971cee1d6827e5c0e0b5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:17:00.353152    4843 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:17:00.353541    4843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/kubeconfig: {Name:mkc10f7ed093884fc8129fa2ab95ce544a51f269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:17:00.353746    4843 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:17:00.353760    4843 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 11:17:00.353791    4843 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-820000"
	I0725 11:17:00.353805    4843 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-820000"
	W0725 11:17:00.353809    4843 addons.go:243] addon storage-provisioner should already be in state true
	I0725 11:17:00.353813    4843 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-820000"
	I0725 11:17:00.353820    4843 host.go:66] Checking if "stopped-upgrade-820000" exists ...
	I0725 11:17:00.353824    4843 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-820000"
	I0725 11:17:00.353833    4843 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:17:00.354993    4843 kapi.go:59] client config for stopped-upgrade-820000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/stopped-upgrade-820000/client.key", CAFile:"/Users/jenkins/minikube-integration/19326-1196/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x106493fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
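
Editor's note: the client config dumped above is a standard client-go rest.Config authenticated with the profile's client certificate. A hedged hand-built equivalent follows; the host and certificate paths come from the log, and error handling is trimmed to keep the sketch short.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/Users/jenkins/minikube-integration/19326-1196/.minikube"
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/profiles/stopped-upgrade-820000/client.crt",
                KeyFile:  profile + "/profiles/stopped-upgrade-820000/client.key",
                CAFile:   profile + "/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("building clientset failed:", err)
            return
        }
        _ = clientset // e.g. clientset.StorageV1().StorageClasses().List(...)
    }
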
	I0725 11:17:00.355105    4843 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-820000"
	W0725 11:17:00.355109    4843 addons.go:243] addon default-storageclass should already be in state true
	I0725 11:17:00.355116    4843 host.go:66] Checking if "stopped-upgrade-820000" exists ...
	I0725 11:17:00.358161    4843 out.go:177] * Verifying Kubernetes components...
	I0725 11:17:00.358488    4843 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 11:17:00.362181    4843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 11:17:00.362190    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:17:00.368034    4843 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 11:17:00.374062    4843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 11:17:00.377064    4843 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 11:17:00.377071    4843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 11:17:00.377079    4843 sshutil.go:53] new ssh client: &{IP:localhost Port:50463 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/stopped-upgrade-820000/id_rsa Username:docker}
	I0725 11:17:00.467255    4843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 11:17:00.473954    4843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 11:17:00.475409    4843 api_server.go:52] waiting for apiserver process to appear ...
	I0725 11:17:00.475435    4843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 11:17:00.536444    4843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
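
Editor's note: addon installation is simply kubectl apply run on the guest against the node-local kubeconfig, after the manifests were scp'd into /etc/kubernetes/addons. A sketch of that apply step; the paths and version come from the log, the helper is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon applies one addon manifest with the guest's bundled kubectl,
    // mirroring the two apply commands in the log above.
    func applyAddon(version, manifest string) error {
        kubectl := "/var/lib/minikube/binaries/" + version + "/kubectl"
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            kubectl, "apply", "-f", "/etc/kubernetes/addons/"+manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
        }
        return nil
    }

    func main() {
        for _, m := range []string{"storageclass.yaml", "storage-provisioner.yaml"} {
            if err := applyAddon("v1.24.1", m); err != nil {
                fmt.Println(err) // in this run the default-storageclass step later fails with an i/o timeout
            }
        }
    }
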
	I0725 11:17:00.807733    4843 api_server.go:72] duration metric: took 453.988625ms to wait for apiserver process to appear ...
	I0725 11:17:00.807747    4843 api_server.go:88] waiting for apiserver healthz status ...
	I0725 11:17:00.807756    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:05.809519    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:05.809571    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:10.809692    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:10.809710    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:15.809872    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:15.809927    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:20.810160    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:20.810209    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:25.810571    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:25.810609    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0725 11:17:30.809035    4843 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0725 11:17:30.811067    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:30.811086    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:30.813375    4843 out.go:177] * Enabled addons: storage-provisioner
	I0725 11:17:30.824243    4843 addons.go:510] duration metric: took 30.471482208s for enable addons: enabled=[storage-provisioner]
	I0725 11:17:35.811772    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:35.811823    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:40.812693    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:40.812715    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:45.814082    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:45.814143    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:50.815303    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:50.815334    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:17:55.817205    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:17:55.817243    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:00.819477    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:00.819814    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:00.848343    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:00.848465    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:00.865589    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:00.865667    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:00.878813    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:00.878889    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:00.893709    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:00.893780    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:00.905926    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:00.905993    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:00.916115    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:00.916179    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:00.926902    4843 logs.go:276] 0 containers: []
	W0725 11:18:00.926912    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:00.926961    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:00.937968    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:00.937983    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:00.937987    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:00.953226    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:00.953237    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:00.965726    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:00.965738    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:00.983022    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:00.983035    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:00.996949    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:00.996958    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:01.031953    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:01.031961    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:01.043445    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:01.043456    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:01.057686    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:01.057695    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:01.072389    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:01.072401    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:01.084118    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:01.084128    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:01.095847    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:01.095857    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:01.118779    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:01.118786    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:01.122780    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:01.122785    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:03.663193    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:08.665536    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:08.665749    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:08.687873    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:08.688011    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:08.703423    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:08.703500    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:08.716512    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:08.716579    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:08.727546    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:08.727613    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:08.737775    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:08.737843    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:08.748000    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:08.748068    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:08.758419    4843 logs.go:276] 0 containers: []
	W0725 11:18:08.758431    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:08.758482    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:08.769186    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:08.769201    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:08.769206    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:08.773975    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:08.773984    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:08.788142    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:08.788154    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:08.802899    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:08.802909    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:08.820190    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:08.820203    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:08.831158    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:08.831169    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:08.842352    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:08.842364    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:08.854347    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:08.854360    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:08.879002    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:08.879010    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:08.912688    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:08.912698    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:08.956997    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:08.957010    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:08.971161    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:08.971171    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:08.984281    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:08.984292    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:11.497988    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:16.500769    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:16.501248    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:16.552317    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:16.552451    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:16.570700    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:16.570776    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:16.584319    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:16.584400    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:16.596090    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:16.596160    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:16.608383    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:16.608454    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:16.624354    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:16.624421    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:16.635167    4843 logs.go:276] 0 containers: []
	W0725 11:18:16.635179    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:16.635232    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:16.645726    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:16.645740    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:16.645745    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:16.681184    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:16.681195    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:16.698142    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:16.698156    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:16.710666    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:16.710675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:16.725079    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:16.725090    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:16.737106    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:16.737119    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:16.755636    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:16.755649    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:16.778470    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:16.778478    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:16.789509    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:16.789520    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:16.823662    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:16.823672    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:16.827764    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:16.827771    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:16.841601    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:16.841612    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:16.853479    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:16.853490    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:19.375237    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:24.377867    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:24.378205    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:24.409085    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:24.409206    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:24.427417    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:24.427507    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:24.441287    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:24.441369    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:24.452680    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:24.452752    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:24.462960    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:24.463026    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:24.473936    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:24.473993    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:24.484027    4843 logs.go:276] 0 containers: []
	W0725 11:18:24.484039    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:24.484091    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:24.494006    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:24.494021    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:24.494027    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:24.506064    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:24.506073    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:24.517159    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:24.517181    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:24.521446    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:24.521455    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:24.562490    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:24.562505    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:24.574159    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:24.574168    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:24.586091    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:24.586100    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:24.604313    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:24.604323    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:24.625092    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:24.625101    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:24.636661    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:24.636673    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:24.660300    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:24.660306    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:24.692284    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:24.692290    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:24.706238    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:24.706249    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:27.222242    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:32.224862    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:32.225077    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:32.251611    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:32.251725    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:32.266904    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:32.266977    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:32.279232    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:32.279313    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:32.290449    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:32.290514    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:32.300980    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:32.301032    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:32.311461    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:32.311533    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:32.321581    4843 logs.go:276] 0 containers: []
	W0725 11:18:32.321593    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:32.321642    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:32.332006    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:32.332024    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:32.332029    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:32.356743    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:32.356753    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:32.368189    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:32.368199    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:32.372485    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:32.372495    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:32.406317    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:32.406328    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:32.420295    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:32.420308    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:32.433543    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:32.433556    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:32.444886    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:32.444899    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:32.456186    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:32.456199    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:32.471434    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:32.471447    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:32.483487    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:32.483498    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:32.515829    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:32.515840    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:32.527347    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:32.527358    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:35.046633    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:40.049300    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:40.049703    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:40.091271    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:40.091409    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:40.113009    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:40.113112    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:40.127499    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:40.127570    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:40.141019    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:40.141086    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:40.152038    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:40.152099    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:40.162911    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:40.162976    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:40.173331    4843 logs.go:276] 0 containers: []
	W0725 11:18:40.173341    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:40.173393    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:40.183733    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:40.183747    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:40.183752    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:40.187895    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:40.187903    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:40.225811    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:40.225826    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:40.240546    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:40.240560    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:40.258592    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:40.258602    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:40.270207    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:40.270216    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:40.296376    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:40.296397    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:40.331618    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:40.331629    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:40.345750    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:40.345763    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:40.357907    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:40.357921    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:40.369817    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:40.369827    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:40.384265    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:40.384277    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:40.396472    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:40.396483    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:42.909507    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:47.912180    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:47.912639    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:47.964974    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:47.965082    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:47.984367    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:47.984447    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:48.002149    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:48.002219    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:48.015941    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:48.016012    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:48.026529    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:48.026595    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:48.037760    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:48.037827    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:48.049341    4843 logs.go:276] 0 containers: []
	W0725 11:18:48.049354    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:48.049427    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:48.060737    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:48.060752    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:48.060757    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:48.095478    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:48.095490    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:48.109334    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:48.109346    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:48.124755    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:48.124767    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:48.138679    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:48.138691    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:48.151418    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:48.151432    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:48.171544    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:48.171557    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:48.205209    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:48.205219    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:48.210666    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:48.210675    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:48.222111    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:48.222122    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:48.246849    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:48.246857    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:48.258033    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:48.258044    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:48.275100    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:48.275111    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:50.795341    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:18:55.796915    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:18:55.797349    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:18:55.835487    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:18:55.835610    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:18:55.856331    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:18:55.856444    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:18:55.871177    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:18:55.871252    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:18:55.884578    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:18:55.884652    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:18:55.895215    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:18:55.895281    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:18:55.906643    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:18:55.906702    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:18:55.916903    4843 logs.go:276] 0 containers: []
	W0725 11:18:55.916917    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:18:55.916975    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:18:55.927829    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:18:55.927844    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:18:55.927849    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:18:55.963061    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:18:55.963076    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:18:55.980658    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:18:55.980670    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:18:55.992422    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:18:55.992434    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:18:56.017330    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:18:56.017336    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:18:56.035207    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:18:56.035218    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:18:56.046867    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:18:56.046880    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:18:56.079323    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:18:56.079330    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:18:56.083146    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:18:56.083154    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:18:56.097286    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:18:56.097299    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:18:56.117895    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:18:56.117908    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:18:56.130467    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:18:56.130475    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:18:56.145305    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:18:56.145317    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:18:58.663588    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:03.666216    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:03.666555    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:03.700160    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:03.700262    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:03.721522    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:03.721606    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:03.738170    4843 logs.go:276] 2 containers: [b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:03.738238    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:03.749857    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:03.749918    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:03.762587    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:03.762657    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:03.772447    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:03.772506    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:03.782508    4843 logs.go:276] 0 containers: []
	W0725 11:19:03.782519    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:03.782574    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:03.793140    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:03.793154    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:03.793159    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:03.807641    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:03.807652    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:03.825331    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:03.825341    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:03.836489    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:03.836498    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:03.870884    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:03.870894    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:03.904514    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:03.904525    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:03.918813    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:03.918827    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:03.929981    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:03.929990    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:03.954109    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:03.954120    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:03.965323    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:03.965333    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:03.969810    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:03.969818    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:03.984178    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:03.984187    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:03.995222    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:03.995236    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:06.509499    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:11.511542    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:11.511650    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:11.524202    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:11.524260    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:11.535064    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:11.535132    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:11.546105    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:11.546165    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:11.556872    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:11.556921    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:11.567566    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:11.567624    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:11.578007    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:11.578072    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:11.588483    4843 logs.go:276] 0 containers: []
	W0725 11:19:11.588497    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:11.588545    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:11.598928    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:11.598947    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:11.598952    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:11.631533    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:11.631544    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:11.666902    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:11.666917    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:11.683501    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:11.683513    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:11.695327    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:11.695338    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:11.706780    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:11.706792    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:11.730368    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:11.730376    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:11.741689    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:11.741702    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:11.755556    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:11.755566    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:11.772864    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:11.772873    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:11.784603    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:11.784615    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:11.796778    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:11.796790    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:11.809142    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:11.809158    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:11.813789    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:11.813798    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:11.825184    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:11.825196    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:14.347348    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:19.349685    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:19.350105    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:19.392273    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:19.392403    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:19.412738    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:19.412822    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:19.428187    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:19.428261    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:19.441242    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:19.441310    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:19.452550    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:19.452610    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:19.463228    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:19.463284    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:19.473576    4843 logs.go:276] 0 containers: []
	W0725 11:19:19.473585    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:19.473635    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:19.484174    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:19.484190    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:19.484196    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:19.497867    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:19.497882    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:19.514973    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:19.514983    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:19.527340    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:19.527352    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:19.547333    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:19.547345    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:19.560696    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:19.560706    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:19.573292    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:19.573303    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:19.585122    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:19.585132    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:19.610423    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:19.610429    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:19.645159    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:19.645168    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:19.649662    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:19.649668    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:19.665299    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:19.665314    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:19.681333    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:19.681345    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:19.719140    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:19.719151    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:19.731909    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:19.731923    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:22.245989    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:27.248116    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:27.248551    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:27.284433    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:27.284596    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:27.306114    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:27.306214    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:27.322399    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:27.322473    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:27.334289    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:27.334359    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:27.345088    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:27.345154    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:27.356028    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:27.356100    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:27.369993    4843 logs.go:276] 0 containers: []
	W0725 11:19:27.370004    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:27.370061    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:27.380888    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:27.380905    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:27.380910    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:27.415689    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:27.415697    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:27.439895    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:27.439906    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:27.454391    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:27.454401    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:27.473768    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:27.473780    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:27.492114    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:27.492128    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:27.509966    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:27.509977    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:27.522250    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:27.522261    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:27.526934    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:27.526943    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:27.563716    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:27.563730    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:27.578073    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:27.578085    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:27.590164    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:27.590175    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:27.602058    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:27.602068    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:27.615711    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:27.615721    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:27.628054    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:27.628066    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:30.141984    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:35.144568    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:35.144954    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:35.183646    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:35.183772    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:35.206488    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:35.206589    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:35.223082    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:35.223153    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:35.235814    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:35.235881    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:35.246632    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:35.246693    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:35.257650    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:35.257706    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:35.267865    4843 logs.go:276] 0 containers: []
	W0725 11:19:35.267875    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:35.267931    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:35.278765    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:35.278784    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:35.278789    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:35.302808    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:35.302818    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:35.336043    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:35.336055    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:35.350692    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:35.350702    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:35.362620    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:35.362632    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:35.373933    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:35.373943    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:35.388646    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:35.388658    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:35.400238    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:35.400251    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:35.411602    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:35.411615    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:35.424774    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:35.424788    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:35.448502    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:35.448511    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:35.465084    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:35.465098    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:35.499677    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:35.499685    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:35.503797    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:35.503805    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:35.515323    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:35.515333    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:38.029402    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:43.032118    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:43.032577    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:43.070234    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:43.070358    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:43.095780    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:43.095864    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:43.110206    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:43.110272    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:43.122300    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:43.122360    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:43.133079    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:43.133143    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:43.143413    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:43.143468    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:43.154945    4843 logs.go:276] 0 containers: []
	W0725 11:19:43.154955    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:43.155000    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:43.165388    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:43.165407    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:43.165412    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:43.169665    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:43.169672    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:43.203744    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:43.203753    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:43.215581    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:43.215596    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:43.229966    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:43.229976    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:43.241743    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:43.241752    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:43.265346    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:43.265356    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:43.282959    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:43.282971    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:43.294129    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:43.294140    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:43.309948    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:43.309961    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:43.325248    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:43.325262    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:43.336869    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:43.336882    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:43.354090    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:43.354100    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:43.367150    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:43.367163    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:43.382007    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:43.382019    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:45.917434    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:50.919522    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:50.919945    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:50.953529    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:50.953657    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:50.972497    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:50.972599    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:50.987365    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:50.987440    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:50.998916    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:50.998981    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:51.009732    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:51.009796    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:51.020061    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:51.020131    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:51.030339    4843 logs.go:276] 0 containers: []
	W0725 11:19:51.030348    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:51.030397    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:51.040826    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:51.040842    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:51.040849    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:51.075936    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:51.075944    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:19:51.087608    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:51.087621    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:51.104129    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:51.104142    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:51.119489    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:51.119499    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:51.133469    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:51.133480    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:51.149021    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:51.149032    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:51.174333    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:51.174339    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:51.188709    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:51.188720    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:51.200363    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:51.200377    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:51.211894    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:51.211905    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:51.223840    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:51.223849    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:51.241105    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:51.241114    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:51.245776    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:51.245782    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:51.278823    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:51.278834    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:53.791528    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:19:58.793006    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:19:58.793224    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:19:58.810182    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:19:58.810257    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:19:58.823374    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:19:58.823455    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:19:58.836811    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:19:58.836890    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:19:58.847095    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:19:58.847156    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:19:58.857523    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:19:58.857581    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:19:58.870703    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:19:58.870761    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:19:58.880694    4843 logs.go:276] 0 containers: []
	W0725 11:19:58.880703    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:19:58.880745    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:19:58.891044    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:19:58.891062    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:19:58.891068    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:19:58.902779    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:19:58.902794    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:19:58.916565    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:19:58.916577    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:19:58.934182    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:19:58.934195    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:19:58.948313    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:19:58.948325    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:19:58.962624    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:19:58.962633    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:19:58.985822    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:19:58.985830    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:19:58.997043    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:19:58.997056    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:19:59.030647    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:19:59.030660    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:19:59.042433    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:19:59.042442    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:19:59.053481    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:19:59.053489    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:19:59.071818    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:19:59.071828    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:19:59.084184    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:19:59.084197    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:19:59.116994    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:19:59.117001    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:19:59.120986    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:19:59.120995    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:01.634394    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:06.635379    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:06.635777    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:06.674844    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:06.674981    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:06.693875    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:06.693955    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:06.709899    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:06.709973    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:06.721231    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:06.721299    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:06.732395    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:06.732452    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:06.743056    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:06.743122    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:06.757712    4843 logs.go:276] 0 containers: []
	W0725 11:20:06.757724    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:06.757770    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:06.768159    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:06.768181    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:06.768186    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:06.780139    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:06.780149    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:06.797815    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:06.797826    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:06.809787    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:06.809798    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:06.823813    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:06.823823    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:06.838447    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:06.838460    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:06.849528    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:06.849537    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:06.853767    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:06.853776    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:06.887127    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:06.887137    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:06.901688    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:06.901699    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:06.925334    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:06.925342    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:06.958376    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:06.958385    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:06.973187    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:06.973199    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:06.986051    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:06.986063    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:06.999857    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:06.999870    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:09.513454    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:14.515564    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:14.515996    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:14.556818    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:14.556953    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:14.578729    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:14.578817    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:14.594338    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:14.594412    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:14.607421    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:14.607489    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:14.620786    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:14.620860    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:14.631941    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:14.632002    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:14.642723    4843 logs.go:276] 0 containers: []
	W0725 11:20:14.642736    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:14.642796    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:14.654139    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:14.654155    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:14.654160    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:14.688008    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:14.688017    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:14.730233    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:14.730245    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:14.742218    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:14.742230    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:14.755398    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:14.755411    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:14.773424    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:14.773438    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:14.791715    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:14.791729    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:14.803554    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:14.803567    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:14.815302    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:14.815311    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:14.833540    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:14.833550    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:14.845427    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:14.845438    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:14.857672    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:14.857681    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:14.861844    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:14.861853    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:14.875929    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:14.875939    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:14.891857    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:14.891866    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:17.422451    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:22.422675    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:22.422783    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:22.436348    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:22.436405    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:22.446876    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:22.446941    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:22.457962    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:22.458030    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:22.469253    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:22.469316    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:22.484110    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:22.484169    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:22.495663    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:22.495713    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:22.507286    4843 logs.go:276] 0 containers: []
	W0725 11:20:22.507297    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:22.507352    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:22.518663    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:22.518680    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:22.518685    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:22.522876    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:22.522882    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:22.534865    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:22.534877    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:22.546032    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:22.546043    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:22.580357    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:22.580365    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:22.594425    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:22.594435    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:22.606065    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:22.606075    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:22.620043    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:22.620054    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:22.634242    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:22.634251    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:22.667942    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:22.667953    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:22.682263    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:22.682274    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:22.699555    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:22.699565    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:22.711081    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:22.711091    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:22.735670    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:22.735677    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:22.750723    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:22.750732    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:25.264808    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:30.267576    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:30.267958    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:30.307426    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:30.307551    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:30.328675    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:30.328758    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:30.343515    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:30.343598    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:30.356182    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:30.356238    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:30.366785    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:30.366848    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:30.377232    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:30.377298    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:30.394396    4843 logs.go:276] 0 containers: []
	W0725 11:20:30.394408    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:30.394458    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:30.405231    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:30.405250    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:30.405255    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:30.420102    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:30.420114    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:30.432509    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:30.432523    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:30.444246    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:30.444258    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:30.469455    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:30.469465    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:30.503783    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:30.503793    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:30.508346    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:30.508355    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:30.527614    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:30.527625    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:30.539214    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:30.539227    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:30.574244    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:30.574259    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:30.585609    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:30.585624    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:30.597709    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:30.597719    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:30.609645    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:30.609659    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:30.626666    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:30.626677    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:30.639108    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:30.639121    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:33.164070    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:38.166121    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:38.166407    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:38.209522    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:38.209608    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:38.227209    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:38.227288    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:38.241019    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:38.241081    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:38.251329    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:38.251394    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:38.261964    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:38.262037    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:38.272591    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:38.272649    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:38.282936    4843 logs.go:276] 0 containers: []
	W0725 11:20:38.282948    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:38.283001    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:38.294454    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:38.294473    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:38.294491    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:38.329530    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:38.329543    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:38.344607    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:38.344618    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:38.358845    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:38.358859    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:38.370716    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:38.370732    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:38.385466    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:38.385476    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:38.398094    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:38.398104    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:38.432652    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:38.432663    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:38.443666    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:38.443678    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:38.456879    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:38.456894    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:38.474071    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:38.474080    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:38.485466    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:38.485478    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:38.508756    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:38.508767    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:38.513110    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:38.513118    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:38.524893    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:38.524906    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:41.039036    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:46.039730    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:46.039806    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:46.051604    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:46.051664    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:46.062601    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:46.062662    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:46.074049    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:46.074103    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:46.086213    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:46.086265    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:46.096773    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:46.096829    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:46.108861    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:46.108925    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:46.120533    4843 logs.go:276] 0 containers: []
	W0725 11:20:46.120547    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:46.120598    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:46.132244    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:46.132260    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:46.132265    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:46.167215    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:46.167232    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:46.182785    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:46.182792    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:46.206180    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:46.206192    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:46.225867    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:46.225880    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:46.230437    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:46.230448    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:46.245341    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:46.245354    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:46.258071    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:46.258082    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:46.270338    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:46.270349    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:46.281845    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:46.281856    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:46.294689    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:46.294700    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:46.307329    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:46.307341    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:46.322903    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:46.322914    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:46.339768    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:46.339778    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:46.377853    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:46.377865    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:48.891019    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:20:53.893208    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:20:53.893326    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0725 11:20:53.907766    4843 logs.go:276] 1 containers: [f7e86f7d4929]
	I0725 11:20:53.907836    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0725 11:20:53.920337    4843 logs.go:276] 1 containers: [cf1e8a0bfc8d]
	I0725 11:20:53.920403    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0725 11:20:53.933161    4843 logs.go:276] 4 containers: [52afae889528 a19b8ef3108d b3c135a74fc0 b6cc13d45f9d]
	I0725 11:20:53.933239    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0725 11:20:53.945484    4843 logs.go:276] 1 containers: [7817f4096eea]
	I0725 11:20:53.945550    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0725 11:20:53.966289    4843 logs.go:276] 1 containers: [dc9e05da9433]
	I0725 11:20:53.966350    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0725 11:20:53.977590    4843 logs.go:276] 1 containers: [5b133bdf7174]
	I0725 11:20:53.977653    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0725 11:20:53.988510    4843 logs.go:276] 0 containers: []
	W0725 11:20:53.988523    4843 logs.go:278] No container was found matching "kindnet"
	I0725 11:20:53.988571    4843 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0725 11:20:53.999786    4843 logs.go:276] 1 containers: [5a7b0dc09ffb]
	I0725 11:20:53.999807    4843 logs.go:123] Gathering logs for kubelet ...
	I0725 11:20:53.999813    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0725 11:20:54.033998    4843 logs.go:123] Gathering logs for dmesg ...
	I0725 11:20:54.034006    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 11:20:54.038135    4843 logs.go:123] Gathering logs for coredns [b6cc13d45f9d] ...
	I0725 11:20:54.038144    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6cc13d45f9d"
	I0725 11:20:54.050909    4843 logs.go:123] Gathering logs for storage-provisioner [5a7b0dc09ffb] ...
	I0725 11:20:54.050920    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a7b0dc09ffb"
	I0725 11:20:54.062779    4843 logs.go:123] Gathering logs for Docker ...
	I0725 11:20:54.062790    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0725 11:20:54.087951    4843 logs.go:123] Gathering logs for coredns [52afae889528] ...
	I0725 11:20:54.087960    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52afae889528"
	I0725 11:20:54.099820    4843 logs.go:123] Gathering logs for coredns [b3c135a74fc0] ...
	I0725 11:20:54.099830    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3c135a74fc0"
	I0725 11:20:54.111430    4843 logs.go:123] Gathering logs for kube-proxy [dc9e05da9433] ...
	I0725 11:20:54.111441    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9e05da9433"
	I0725 11:20:54.123365    4843 logs.go:123] Gathering logs for describe nodes ...
	I0725 11:20:54.123377    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 11:20:54.162689    4843 logs.go:123] Gathering logs for etcd [cf1e8a0bfc8d] ...
	I0725 11:20:54.162701    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf1e8a0bfc8d"
	I0725 11:20:54.177176    4843 logs.go:123] Gathering logs for kube-apiserver [f7e86f7d4929] ...
	I0725 11:20:54.177187    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7e86f7d4929"
	I0725 11:20:54.192104    4843 logs.go:123] Gathering logs for coredns [a19b8ef3108d] ...
	I0725 11:20:54.192114    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a19b8ef3108d"
	I0725 11:20:54.204078    4843 logs.go:123] Gathering logs for kube-scheduler [7817f4096eea] ...
	I0725 11:20:54.204088    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7817f4096eea"
	I0725 11:20:54.222101    4843 logs.go:123] Gathering logs for kube-controller-manager [5b133bdf7174] ...
	I0725 11:20:54.222111    4843 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b133bdf7174"
	I0725 11:20:54.248410    4843 logs.go:123] Gathering logs for container status ...
	I0725 11:20:54.248421    4843 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 11:20:56.762298    4843 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0725 11:21:01.764459    4843 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0725 11:21:01.770785    4843 out.go:177] 
	W0725 11:21:01.782099    4843 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0725 11:21:01.782146    4843 out.go:239] * 
	W0725 11:21:01.784983    4843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:01.797773    4843 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-820000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.87s)
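
The GUEST_START failure is the end of the retry loop that fills the log above: every few seconds minikube issues a GET against https://10.0.2.15:8443/healthz with a short per-request client timeout (the "Client.Timeout exceeded" in each api_server.go:269 line), and after each timeout it re-enumerates the control-plane containers and re-gathers their logs, until the overall "wait 6m0s for node" deadline expires. The following is a minimal, self-contained Go sketch of that polling pattern; the pollHealthz name, the intervals, and the TLS handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz (hypothetical helper) retries GET url until it returns 200 OK
// or the overall deadline passes, mirroring the api_server.go:253/269
// "Checking apiserver healthz ... stopped" pairs in the log above.
func pollHealthz(url string, interval, deadline time.Duration) error {
	client := &http.Client{
		// Per-request limit; when it fires, the poller logs "stopped: ...
		// Client.Timeout exceeded while awaiting headers" and retries.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a cluster-internal cert, so this
			// sketch skips verification; a real client would pin the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(interval)
	}
	return errors.New("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 3*time.Second, 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against a VM whose apiserver never answers, as here, every Get times out, the loop runs the full six minutes, and the caller exits with GUEST_START (exit status 80).
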
TestPause/serial/Start (9.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.8096375s)

-- stdout --
	* [pause-938000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-938000" primary control-plane node in "pause-938000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-938000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-938000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-938000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-938000 -n pause-938000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-938000 -n pause-938000: exit status 7 (44.307083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-938000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.85s)
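
This failure, and every remaining one in this section, has the same root cause: the qemu2 driver cannot reach the socket_vmnet daemon, so VM creation fails on the first attempt and again on the automatic retry. Before reading the per-test logs, it is worth checking the daemon itself on the CI host (a diagnostic sketch; the /opt/socket_vmnet and /var/run/socket_vmnet paths are the ones shown in the logs):

	ls -l /var/run/socket_vmnet                   # the unix socket the driver dials
	pgrep -fl socket_vmnet                        # is the daemon process running?
	sudo launchctl list | grep -i socket_vmnet    # present if installed as a launchd service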

TestNoKubernetes/serial/StartWithK8s (9.97s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 : exit status 80 (9.916932417s)

-- stdout --
	* [NoKubernetes-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-007000" primary control-plane node in "NoKubernetes-007000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-007000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-007000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000: exit status 7 (51.069125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-007000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.97s)
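
The "Connection refused" error is reproducible without minikube by dialing the daemon's unix socket directly (a sketch; macOS's bundled BSD netcat supports -U for unix-domain sockets):

	nc -U /var/run/socket_vmnet </dev/null && echo connected || echo refused

If this prints "refused", or the socket file does not exist, the failures in this group are environmental rather than regressions in minikube itself.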

TestNoKubernetes/serial/StartWithStopK8s (5.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 : exit status 80 (5.234083375s)

-- stdout --
	* [NoKubernetes-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-007000
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-007000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000: exit status 7 (33.68725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-007000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 : exit status 80 (5.250461708s)

-- stdout --
	* [NoKubernetes-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-007000
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-007000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000: exit status 7 (70.588208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-007000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.35s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 : exit status 80 (5.29062925s)

-- stdout --
	* [NoKubernetes-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-007000
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-007000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-007000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-007000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-007000 -n NoKubernetes-007000: exit status 7 (62.709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-007000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.35s)
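
Each post-mortem above runs `minikube status` with a Go template that prints only the host field. The exit status of 7 is minikube's status bitmask with the host-, cluster-, and Kubernetes-not-running bits all set, which is why helpers_test.go labels it "may be ok" instead of treating it as a command failure. The full state is also available in machine-readable form (a sketch; --output json is a standard flag of the status command):

	out/minikube-darwin-arm64 status -p NoKubernetes-007000 --output json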

TestNetworkPlugins/group/auto/Start (9.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0725 11:19:15.203711    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.85531s)

-- stdout --
	* [auto-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-411000" primary control-plane node in "auto-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:19:09.828908    5058 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:19:09.829042    5058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:09.829045    5058 out.go:304] Setting ErrFile to fd 2...
	I0725 11:19:09.829048    5058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:09.829194    5058 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:19:09.830237    5058 out.go:298] Setting JSON to false
	I0725 11:19:09.846764    5058 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4713,"bootTime":1721926836,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:19:09.846844    5058 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:19:09.851415    5058 out.go:177] * [auto-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:19:09.859493    5058 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:19:09.859546    5058 notify.go:220] Checking for updates...
	I0725 11:19:09.866444    5058 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:19:09.870315    5058 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:19:09.873417    5058 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:19:09.876459    5058 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:19:09.880331    5058 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:19:09.883732    5058 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:19:09.883799    5058 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:19:09.883845    5058 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:19:09.888464    5058 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:19:09.893441    5058 start.go:297] selected driver: qemu2
	I0725 11:19:09.893453    5058 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:19:09.893459    5058 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:19:09.895654    5058 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:19:09.898436    5058 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:19:09.901540    5058 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:19:09.901571    5058 cni.go:84] Creating CNI manager for ""
	I0725 11:19:09.901578    5058 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:19:09.901584    5058 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:19:09.901611    5058 start.go:340] cluster config:
	{Name:auto-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:19:09.905259    5058 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:19:09.912451    5058 out.go:177] * Starting "auto-411000" primary control-plane node in "auto-411000" cluster
	I0725 11:19:09.916410    5058 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:19:09.916427    5058 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:19:09.916437    5058 cache.go:56] Caching tarball of preloaded images
	I0725 11:19:09.916490    5058 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:19:09.916495    5058 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:19:09.916545    5058 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/auto-411000/config.json ...
	I0725 11:19:09.916555    5058 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/auto-411000/config.json: {Name:mkd80bd6bc81f6deb4a78b255fbb1ce3733e90c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:19:09.916878    5058 start.go:360] acquireMachinesLock for auto-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:09.916910    5058 start.go:364] duration metric: took 26µs to acquireMachinesLock for "auto-411000"
	I0725 11:19:09.916920    5058 start.go:93] Provisioning new machine with config: &{Name:auto-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:09.916956    5058 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:09.923429    5058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:09.940172    5058 start.go:159] libmachine.API.Create for "auto-411000" (driver="qemu2")
	I0725 11:19:09.940205    5058 client.go:168] LocalClient.Create starting
	I0725 11:19:09.940269    5058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:09.940298    5058 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:09.940306    5058 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:09.940346    5058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:09.940369    5058 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:09.940378    5058 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:09.940733    5058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:10.089229    5058 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:10.251455    5058 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:10.251465    5058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:10.251641    5058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:10.261531    5058 main.go:141] libmachine: STDOUT: 
	I0725 11:19:10.261553    5058 main.go:141] libmachine: STDERR: 
	I0725 11:19:10.261625    5058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2 +20000M
	I0725 11:19:10.270435    5058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:10.270453    5058 main.go:141] libmachine: STDERR: 
	I0725 11:19:10.270475    5058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:10.270480    5058 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:10.270495    5058 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:10.270517    5058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:c8:93:6a:20:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:10.272381    5058 main.go:141] libmachine: STDOUT: 
	I0725 11:19:10.272398    5058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:10.272414    5058 client.go:171] duration metric: took 332.214583ms to LocalClient.Create
	I0725 11:19:12.274522    5058 start.go:128] duration metric: took 2.357628541s to createHost
	I0725 11:19:12.274562    5058 start.go:83] releasing machines lock for "auto-411000", held for 2.357723209s
	W0725 11:19:12.274599    5058 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:12.284098    5058 out.go:177] * Deleting "auto-411000" in qemu2 ...
	W0725 11:19:12.302235    5058 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:12.302251    5058 start.go:729] Will try again in 5 seconds ...
	I0725 11:19:17.304268    5058 start.go:360] acquireMachinesLock for auto-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:17.304747    5058 start.go:364] duration metric: took 398.167µs to acquireMachinesLock for "auto-411000"
	I0725 11:19:17.304897    5058 start.go:93] Provisioning new machine with config: &{Name:auto-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:17.305202    5058 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:17.313817    5058 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:17.355290    5058 start.go:159] libmachine.API.Create for "auto-411000" (driver="qemu2")
	I0725 11:19:17.355332    5058 client.go:168] LocalClient.Create starting
	I0725 11:19:17.355430    5058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:17.355493    5058 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:17.355507    5058 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:17.355560    5058 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:17.355600    5058 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:17.355618    5058 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:17.356159    5058 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:17.516614    5058 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:17.603383    5058 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:17.603395    5058 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:17.603576    5058 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:17.613147    5058 main.go:141] libmachine: STDOUT: 
	I0725 11:19:17.613164    5058 main.go:141] libmachine: STDERR: 
	I0725 11:19:17.613223    5058 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2 +20000M
	I0725 11:19:17.621842    5058 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:17.621862    5058 main.go:141] libmachine: STDERR: 
	I0725 11:19:17.621873    5058 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:17.621879    5058 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:17.621889    5058 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:17.621914    5058 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:e4:bb:c1:3a:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/auto-411000/disk.qcow2
	I0725 11:19:17.623665    5058 main.go:141] libmachine: STDOUT: 
	I0725 11:19:17.623693    5058 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:17.623704    5058 client.go:171] duration metric: took 268.375708ms to LocalClient.Create
	I0725 11:19:19.625730    5058 start.go:128] duration metric: took 2.320591875s to createHost
	I0725 11:19:19.625742    5058 start.go:83] releasing machines lock for "auto-411000", held for 2.321042334s
	W0725 11:19:19.625799    5058 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:19.633098    5058 out.go:177] 
	W0725 11:19:19.636072    5058 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:19:19.636077    5058 out.go:239] * 
	* 
	W0725 11:19:19.636509    5058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:19:19.648997    5058 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
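
The verbose log above also shows exactly how the driver starts QEMU: the qemu-system-aarch64 command is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). Following that same usage, the wrapper can be exercised with a trivial child command to test the network path in isolation (a sketch inferred from the invocation in the log, not a documented self-test mode):

	# Connects to the daemon's socket, then runs the child command with the
	# connection on fd 3; with the daemon down it fails with the same
	# "Failed to connect" error seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok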

TestNetworkPlugins/group/calico/Start (9.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.883600125s)

-- stdout --
	* [calico-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-411000" primary control-plane node in "calico-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:19:21.762803    5170 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:19:21.762941    5170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:21.762945    5170 out.go:304] Setting ErrFile to fd 2...
	I0725 11:19:21.762948    5170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:21.763098    5170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:19:21.764182    5170 out.go:298] Setting JSON to false
	I0725 11:19:21.780433    5170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4725,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:19:21.780496    5170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:19:21.785817    5170 out.go:177] * [calico-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:19:21.793709    5170 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:19:21.793811    5170 notify.go:220] Checking for updates...
	I0725 11:19:21.800729    5170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:19:21.803746    5170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:19:21.806744    5170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:19:21.809730    5170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:19:21.812720    5170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:19:21.816034    5170 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:19:21.816098    5170 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:19:21.816143    5170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:19:21.820724    5170 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:19:21.827744    5170 start.go:297] selected driver: qemu2
	I0725 11:19:21.827753    5170 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:19:21.827761    5170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:19:21.829927    5170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:19:21.832730    5170 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:19:21.835815    5170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:19:21.835856    5170 cni.go:84] Creating CNI manager for "calico"
	I0725 11:19:21.835861    5170 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0725 11:19:21.835899    5170 start.go:340] cluster config:
	{Name:calico-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:19:21.839167    5170 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:19:21.846791    5170 out.go:177] * Starting "calico-411000" primary control-plane node in "calico-411000" cluster
	I0725 11:19:21.850720    5170 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:19:21.850734    5170 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:19:21.850745    5170 cache.go:56] Caching tarball of preloaded images
	I0725 11:19:21.850801    5170 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:19:21.850806    5170 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:19:21.850878    5170 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/calico-411000/config.json ...
	I0725 11:19:21.850890    5170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/calico-411000/config.json: {Name:mkf9aee4db6ce53ec9a7027b0eee64dd00fe3ebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:19:21.851197    5170 start.go:360] acquireMachinesLock for calico-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:21.851226    5170 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "calico-411000"
	I0725 11:19:21.851236    5170 start.go:93] Provisioning new machine with config: &{Name:calico-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:21.851263    5170 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:21.859744    5170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:21.875414    5170 start.go:159] libmachine.API.Create for "calico-411000" (driver="qemu2")
	I0725 11:19:21.875444    5170 client.go:168] LocalClient.Create starting
	I0725 11:19:21.875522    5170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:21.875557    5170 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:21.875564    5170 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:21.875609    5170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:21.875631    5170 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:21.875638    5170 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:21.876100    5170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:22.027450    5170 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:22.200798    5170 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:22.200807    5170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:22.201016    5170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:22.210749    5170 main.go:141] libmachine: STDOUT: 
	I0725 11:19:22.210766    5170 main.go:141] libmachine: STDERR: 
	I0725 11:19:22.210837    5170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2 +20000M
	I0725 11:19:22.218699    5170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:22.218723    5170 main.go:141] libmachine: STDERR: 
	I0725 11:19:22.218742    5170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:22.218748    5170 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:22.218759    5170 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:22.218789    5170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:b8:05:88:80:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:22.220386    5170 main.go:141] libmachine: STDOUT: 
	I0725 11:19:22.220402    5170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:22.220422    5170 client.go:171] duration metric: took 344.986125ms to LocalClient.Create
	I0725 11:19:24.222565    5170 start.go:128] duration metric: took 2.3713515s to createHost
	I0725 11:19:24.222639    5170 start.go:83] releasing machines lock for "calico-411000", held for 2.371481166s
	W0725 11:19:24.222735    5170 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:24.233296    5170 out.go:177] * Deleting "calico-411000" in qemu2 ...
	W0725 11:19:24.260470    5170 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:24.260506    5170 start.go:729] Will try again in 5 seconds ...
	I0725 11:19:29.262535    5170 start.go:360] acquireMachinesLock for calico-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:29.263286    5170 start.go:364] duration metric: took 623.125µs to acquireMachinesLock for "calico-411000"
	I0725 11:19:29.263420    5170 start.go:93] Provisioning new machine with config: &{Name:calico-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:29.263721    5170 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:29.275403    5170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:29.327023    5170 start.go:159] libmachine.API.Create for "calico-411000" (driver="qemu2")
	I0725 11:19:29.327105    5170 client.go:168] LocalClient.Create starting
	I0725 11:19:29.327292    5170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:29.327367    5170 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:29.327381    5170 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:29.327453    5170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:29.327500    5170 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:29.327521    5170 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:29.328035    5170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:29.490843    5170 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:29.550312    5170 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:29.550318    5170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:29.550488    5170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:29.559903    5170 main.go:141] libmachine: STDOUT: 
	I0725 11:19:29.559916    5170 main.go:141] libmachine: STDERR: 
	I0725 11:19:29.559955    5170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2 +20000M
	I0725 11:19:29.567823    5170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:29.567844    5170 main.go:141] libmachine: STDERR: 
	I0725 11:19:29.567854    5170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:29.567859    5170 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:29.567866    5170 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:29.567903    5170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:27:77:af:09:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/calico-411000/disk.qcow2
	I0725 11:19:29.569556    5170 main.go:141] libmachine: STDOUT: 
	I0725 11:19:29.569576    5170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:29.569590    5170 client.go:171] duration metric: took 242.468708ms to LocalClient.Create
	I0725 11:19:31.571734    5170 start.go:128] duration metric: took 2.308044166s to createHost
	I0725 11:19:31.571800    5170 start.go:83] releasing machines lock for "calico-411000", held for 2.308503333s
	W0725 11:19:31.572167    5170 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:31.582715    5170 out.go:177] 
	W0725 11:19:31.586622    5170 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:19:31.586640    5170 out.go:239] * 
	* 
	W0725 11:19:31.588617    5170 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:19:31.596616    5170 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
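Note: every failure in this group stops at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its fd=3 network socket and the VM is never started. The following is a minimal Go sketch of just that failing probe; the socket path is taken from the log above, while the program itself (and the helper name probeSocketVMnet) is a hypothetical diagnostic, not part of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet dials the unix socket that socket_vmnet_client connects to.
// A "connection refused" here reproduces the Failed to connect to
// "/var/run/socket_vmnet": Connection refused error seen throughout this
// report, which typically means the socket_vmnet daemon is not running
// (or is listening on a different path).
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails on the CI host, restarting the daemon (for a Homebrew install, likely `sudo brew services restart socket_vmnet`) should clear this whole family of Start failures.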

TestNetworkPlugins/group/custom-flannel/Start (9.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.790063208s)

-- stdout --
	* [custom-flannel-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-411000" primary control-plane node in "custom-flannel-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:19:33.953591    5287 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:19:33.953724    5287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:33.953730    5287 out.go:304] Setting ErrFile to fd 2...
	I0725 11:19:33.953733    5287 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:33.953871    5287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:19:33.954928    5287 out.go:298] Setting JSON to false
	I0725 11:19:33.971115    5287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4737,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:19:33.971211    5287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:19:33.978079    5287 out.go:177] * [custom-flannel-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:19:33.985927    5287 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:19:33.985975    5287 notify.go:220] Checking for updates...
	I0725 11:19:33.992986    5287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:19:33.995916    5287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:19:33.998931    5287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:19:34.001967    5287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:19:34.004861    5287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:19:34.008234    5287 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:19:34.008295    5287 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:19:34.008344    5287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:19:34.011879    5287 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:19:34.030915    5287 start.go:297] selected driver: qemu2
	I0725 11:19:34.030922    5287 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:19:34.030929    5287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:19:34.033103    5287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:19:34.035981    5287 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:19:34.039009    5287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:19:34.039024    5287 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0725 11:19:34.039040    5287 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0725 11:19:34.039065    5287 start.go:340] cluster config:
	{Name:custom-flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:19:34.042451    5287 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:19:34.049740    5287 out.go:177] * Starting "custom-flannel-411000" primary control-plane node in "custom-flannel-411000" cluster
	I0725 11:19:34.053892    5287 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:19:34.053908    5287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:19:34.053917    5287 cache.go:56] Caching tarball of preloaded images
	I0725 11:19:34.053973    5287 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:19:34.053979    5287 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:19:34.054053    5287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/custom-flannel-411000/config.json ...
	I0725 11:19:34.054064    5287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/custom-flannel-411000/config.json: {Name:mk296764649b9319ecc5436a6eea36ab4d718c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:19:34.054318    5287 start.go:360] acquireMachinesLock for custom-flannel-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:34.054350    5287 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "custom-flannel-411000"
	I0725 11:19:34.054360    5287 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:34.054397    5287 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:34.058849    5287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:34.075072    5287 start.go:159] libmachine.API.Create for "custom-flannel-411000" (driver="qemu2")
	I0725 11:19:34.075101    5287 client.go:168] LocalClient.Create starting
	I0725 11:19:34.075160    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:34.075192    5287 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:34.075203    5287 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:34.075242    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:34.075264    5287 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:34.075270    5287 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:34.075718    5287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:34.231875    5287 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:34.294971    5287 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:34.294976    5287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:34.295147    5287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:34.304580    5287 main.go:141] libmachine: STDOUT: 
	I0725 11:19:34.304601    5287 main.go:141] libmachine: STDERR: 
	I0725 11:19:34.304649    5287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2 +20000M
	I0725 11:19:34.312523    5287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:34.312536    5287 main.go:141] libmachine: STDERR: 
	I0725 11:19:34.312558    5287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:34.312563    5287 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:34.312576    5287 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:34.312599    5287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c4:90:9b:29:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:34.314159    5287 main.go:141] libmachine: STDOUT: 
	I0725 11:19:34.314174    5287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:34.314195    5287 client.go:171] duration metric: took 239.097875ms to LocalClient.Create
	I0725 11:19:36.316359    5287 start.go:128] duration metric: took 2.262008917s to createHost
	I0725 11:19:36.316441    5287 start.go:83] releasing machines lock for "custom-flannel-411000", held for 2.262155s
	W0725 11:19:36.316535    5287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:36.322779    5287 out.go:177] * Deleting "custom-flannel-411000" in qemu2 ...
	W0725 11:19:36.349349    5287 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:36.349375    5287 start.go:729] Will try again in 5 seconds ...
	I0725 11:19:41.351444    5287 start.go:360] acquireMachinesLock for custom-flannel-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:41.351999    5287 start.go:364] duration metric: took 410.208µs to acquireMachinesLock for "custom-flannel-411000"
	I0725 11:19:41.352139    5287 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:41.352343    5287 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:41.360782    5287 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:41.410919    5287 start.go:159] libmachine.API.Create for "custom-flannel-411000" (driver="qemu2")
	I0725 11:19:41.411009    5287 client.go:168] LocalClient.Create starting
	I0725 11:19:41.411168    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:41.411246    5287 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:41.411261    5287 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:41.411332    5287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:41.411379    5287 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:41.411396    5287 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:41.412040    5287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:41.573628    5287 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:41.657065    5287 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:41.657072    5287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:41.657263    5287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:41.666707    5287 main.go:141] libmachine: STDOUT: 
	I0725 11:19:41.666729    5287 main.go:141] libmachine: STDERR: 
	I0725 11:19:41.666788    5287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2 +20000M
	I0725 11:19:41.674837    5287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:41.674861    5287 main.go:141] libmachine: STDERR: 
	I0725 11:19:41.674874    5287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:41.674879    5287 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:41.674885    5287 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:41.674920    5287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b7:17:35:5e:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/custom-flannel-411000/disk.qcow2
	I0725 11:19:41.676626    5287 main.go:141] libmachine: STDOUT: 
	I0725 11:19:41.676639    5287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:41.676651    5287 client.go:171] duration metric: took 265.625416ms to LocalClient.Create
	I0725 11:19:43.678699    5287 start.go:128] duration metric: took 2.326403042s to createHost
	I0725 11:19:43.678734    5287 start.go:83] releasing machines lock for "custom-flannel-411000", held for 2.326789375s
	W0725 11:19:43.678983    5287 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:43.691324    5287 out.go:177] 
	W0725 11:19:43.694536    5287 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:19:43.694575    5287 out.go:239] * 
	* 
	W0725 11:19:43.695592    5287 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:19:43.706515    5287 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.79s)
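Note: the control flow visible in each of these logs is identical: create the host, hit Connection refused, delete the half-created profile, wait 5 seconds, retry once, then exit with GUEST_PROVISION (exit status 80). Below is a compact Go sketch of that retry-once pattern; createHost is a hypothetical stub standing in for the qemu2 driver call, and the messages mirror the log lines above.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine's host creation; in this report it
// always fails because the socket_vmnet daemon is unreachable.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	profile := "custom-flannel-411000"
	if err := createHost(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// minikube deletes the profile, waits, and makes exactly one retry.
		time.Sleep(5 * time.Second)
		if err = createHost(profile); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // matches the exit status recorded at net_test.go:114
		}
	}
	fmt.Println("host created for", profile)
}

Because no host ever comes up, net_test.go never reaches the plugin-specific assertions, so the per-test CNI flags (--cni=calico, --cni=testdata/kube-flannel.yaml, --cni=false) go effectively untested in this run.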

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.781389416s)

-- stdout --
	* [false-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-411000" primary control-plane node in "false-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:19:46.060638    5407 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:19:46.060780    5407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:46.060783    5407 out.go:304] Setting ErrFile to fd 2...
	I0725 11:19:46.060786    5407 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:46.060925    5407 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:19:46.061917    5407 out.go:298] Setting JSON to false
	I0725 11:19:46.078013    5407 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4750,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:19:46.078083    5407 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:19:46.084985    5407 out.go:177] * [false-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:19:46.092084    5407 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:19:46.092135    5407 notify.go:220] Checking for updates...
	I0725 11:19:46.098035    5407 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:19:46.101039    5407 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:19:46.104136    5407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:19:46.107046    5407 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:19:46.110075    5407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:19:46.113354    5407 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:19:46.113420    5407 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:19:46.113470    5407 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:19:46.118056    5407 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:19:46.124884    5407 start.go:297] selected driver: qemu2
	I0725 11:19:46.124895    5407 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:19:46.124901    5407 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:19:46.127172    5407 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:19:46.130057    5407 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:19:46.133155    5407 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:19:46.133198    5407 cni.go:84] Creating CNI manager for "false"
	I0725 11:19:46.133237    5407 start.go:340] cluster config:
	{Name:false-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:19:46.137161    5407 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:19:46.143981    5407 out.go:177] * Starting "false-411000" primary control-plane node in "false-411000" cluster
	I0725 11:19:46.147860    5407 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:19:46.147885    5407 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:19:46.147898    5407 cache.go:56] Caching tarball of preloaded images
	I0725 11:19:46.147977    5407 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:19:46.147983    5407 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:19:46.148044    5407 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/false-411000/config.json ...
	I0725 11:19:46.148054    5407 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/false-411000/config.json: {Name:mk7fb8440e4d1e3f01f156107803c7c87d5a186d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:19:46.148366    5407 start.go:360] acquireMachinesLock for false-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:46.148396    5407 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "false-411000"
	I0725 11:19:46.148407    5407 start.go:93] Provisioning new machine with config: &{Name:false-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:46.148436    5407 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:46.152047    5407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:46.167417    5407 start.go:159] libmachine.API.Create for "false-411000" (driver="qemu2")
	I0725 11:19:46.167439    5407 client.go:168] LocalClient.Create starting
	I0725 11:19:46.167504    5407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:46.167539    5407 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:46.167548    5407 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:46.167589    5407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:46.167617    5407 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:46.167624    5407 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:46.167964    5407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:46.319940    5407 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:46.359855    5407 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:46.359860    5407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:46.360026    5407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:46.369191    5407 main.go:141] libmachine: STDOUT: 
	I0725 11:19:46.369214    5407 main.go:141] libmachine: STDERR: 
	I0725 11:19:46.369277    5407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2 +20000M
	I0725 11:19:46.377243    5407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:46.377262    5407 main.go:141] libmachine: STDERR: 
	I0725 11:19:46.377274    5407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:46.377278    5407 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:46.377289    5407 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:46.377316    5407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:a1:46:35:55:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:46.378986    5407 main.go:141] libmachine: STDOUT: 
	I0725 11:19:46.379005    5407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:46.379023    5407 client.go:171] duration metric: took 211.587791ms to LocalClient.Create
	I0725 11:19:48.381336    5407 start.go:128] duration metric: took 2.232926708s to createHost
	I0725 11:19:48.381472    5407 start.go:83] releasing machines lock for "false-411000", held for 2.233138042s
	W0725 11:19:48.381534    5407 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:48.395968    5407 out.go:177] * Deleting "false-411000" in qemu2 ...
	W0725 11:19:48.423441    5407 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:48.423477    5407 start.go:729] Will try again in 5 seconds ...
	I0725 11:19:53.425598    5407 start.go:360] acquireMachinesLock for false-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:53.426241    5407 start.go:364] duration metric: took 432.791µs to acquireMachinesLock for "false-411000"
	I0725 11:19:53.426397    5407 start.go:93] Provisioning new machine with config: &{Name:false-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:53.426619    5407 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:53.435322    5407 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:53.482553    5407 start.go:159] libmachine.API.Create for "false-411000" (driver="qemu2")
	I0725 11:19:53.482602    5407 client.go:168] LocalClient.Create starting
	I0725 11:19:53.482715    5407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:53.482767    5407 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:53.482783    5407 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:53.482835    5407 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:53.482873    5407 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:53.482883    5407 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:53.483365    5407 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:53.643585    5407 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:53.751698    5407 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:53.751704    5407 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:53.751864    5407 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:53.761362    5407 main.go:141] libmachine: STDOUT: 
	I0725 11:19:53.761384    5407 main.go:141] libmachine: STDERR: 
	I0725 11:19:53.761435    5407 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2 +20000M
	I0725 11:19:53.769622    5407 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:53.769640    5407 main.go:141] libmachine: STDERR: 
	I0725 11:19:53.769652    5407 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:53.769657    5407 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:53.769676    5407 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:53.769709    5407 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:3e:2c:6b:6e:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/false-411000/disk.qcow2
	I0725 11:19:53.771353    5407 main.go:141] libmachine: STDOUT: 
	I0725 11:19:53.771379    5407 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:53.771392    5407 client.go:171] duration metric: took 288.792375ms to LocalClient.Create
	I0725 11:19:55.773533    5407 start.go:128] duration metric: took 2.346954375s to createHost
	I0725 11:19:55.773613    5407 start.go:83] releasing machines lock for "false-411000", held for 2.347420958s
	W0725 11:19:55.773948    5407 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:19:55.783695    5407 out.go:177] 
	W0725 11:19:55.789861    5407 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:19:55.789995    5407 out.go:239] * 
	* 
	W0725 11:19:55.792391    5407 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:19:55.799710    5407 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
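Every network-plugin Start failure in this report shares the stderr above: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet, so host creation aborts before the VM ever boots and the CNI under test is never exercised. A minimal diagnostic sketch for the test host follows; the daemon binary path and the gateway address are assumptions based on a standard socket_vmnet installation, not values taken from this log.

	# Does the socket exist, and is a daemon listening on it?
	ls -l /var/run/socket_vmnet
	# If the daemon is down, restart it by hand (vmnet requires root);
	# 192.168.105.1 is the example gateway from socket_vmnet's documentation.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet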

TestNetworkPlugins/group/kindnet/Start (9.73s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.730997417s)

-- stdout --
	* [kindnet-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-411000" primary control-plane node in "kindnet-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:19:57.982141    5520 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:19:57.982275    5520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:57.982278    5520 out.go:304] Setting ErrFile to fd 2...
	I0725 11:19:57.982281    5520 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:19:57.982422    5520 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:19:57.983499    5520 out.go:298] Setting JSON to false
	I0725 11:19:57.999670    5520 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4761,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:19:57.999739    5520 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:19:58.006696    5520 out.go:177] * [kindnet-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:19:58.014693    5520 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:19:58.014754    5520 notify.go:220] Checking for updates...
	I0725 11:19:58.020597    5520 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:19:58.023632    5520 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:19:58.024947    5520 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:19:58.027600    5520 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:19:58.030640    5520 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:19:58.034062    5520 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:19:58.034130    5520 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:19:58.034169    5520 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:19:58.038606    5520 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:19:58.045601    5520 start.go:297] selected driver: qemu2
	I0725 11:19:58.045609    5520 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:19:58.045615    5520 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:19:58.047876    5520 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:19:58.050579    5520 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:19:58.053683    5520 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:19:58.053701    5520 cni.go:84] Creating CNI manager for "kindnet"
	I0725 11:19:58.053708    5520 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 11:19:58.053744    5520 start.go:340] cluster config:
	{Name:kindnet-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:19:58.057687    5520 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:19:58.064571    5520 out.go:177] * Starting "kindnet-411000" primary control-plane node in "kindnet-411000" cluster
	I0725 11:19:58.068614    5520 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:19:58.068648    5520 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:19:58.068662    5520 cache.go:56] Caching tarball of preloaded images
	I0725 11:19:58.068721    5520 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:19:58.068727    5520 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:19:58.068788    5520 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kindnet-411000/config.json ...
	I0725 11:19:58.068802    5520 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kindnet-411000/config.json: {Name:mkcae217103bb7cd24568de74c54fe9e4d46e6ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:19:58.069148    5520 start.go:360] acquireMachinesLock for kindnet-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:19:58.069195    5520 start.go:364] duration metric: took 39.458µs to acquireMachinesLock for "kindnet-411000"
	I0725 11:19:58.069211    5520 start.go:93] Provisioning new machine with config: &{Name:kindnet-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:19:58.069246    5520 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:19:58.073610    5520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:19:58.090808    5520 start.go:159] libmachine.API.Create for "kindnet-411000" (driver="qemu2")
	I0725 11:19:58.090835    5520 client.go:168] LocalClient.Create starting
	I0725 11:19:58.090888    5520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:19:58.090915    5520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:58.090927    5520 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:58.090961    5520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:19:58.090983    5520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:19:58.090991    5520 main.go:141] libmachine: Parsing certificate...
	I0725 11:19:58.091324    5520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:19:58.241278    5520 main.go:141] libmachine: Creating SSH key...
	I0725 11:19:58.299307    5520 main.go:141] libmachine: Creating Disk image...
	I0725 11:19:58.299314    5520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:19:58.299478    5520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:19:58.308870    5520 main.go:141] libmachine: STDOUT: 
	I0725 11:19:58.308889    5520 main.go:141] libmachine: STDERR: 
	I0725 11:19:58.308949    5520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2 +20000M
	I0725 11:19:58.316889    5520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:19:58.316915    5520 main.go:141] libmachine: STDERR: 
	I0725 11:19:58.316931    5520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:19:58.316936    5520 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:19:58.316951    5520 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:19:58.316983    5520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:9c:d5:3d:88:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:19:58.318722    5520 main.go:141] libmachine: STDOUT: 
	I0725 11:19:58.318737    5520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:19:58.318762    5520 client.go:171] duration metric: took 227.928958ms to LocalClient.Create
	I0725 11:20:00.320894    5520 start.go:128] duration metric: took 2.251691791s to createHost
	I0725 11:20:00.320972    5520 start.go:83] releasing machines lock for "kindnet-411000", held for 2.251839125s
	W0725 11:20:00.321063    5520 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:00.331723    5520 out.go:177] * Deleting "kindnet-411000" in qemu2 ...
	W0725 11:20:00.356221    5520 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:00.356247    5520 start.go:729] Will try again in 5 seconds ...
	I0725 11:20:05.357199    5520 start.go:360] acquireMachinesLock for kindnet-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:05.357658    5520 start.go:364] duration metric: took 369.166µs to acquireMachinesLock for "kindnet-411000"
	I0725 11:20:05.357775    5520 start.go:93] Provisioning new machine with config: &{Name:kindnet-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:05.358000    5520 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:05.362390    5520 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:05.402906    5520 start.go:159] libmachine.API.Create for "kindnet-411000" (driver="qemu2")
	I0725 11:20:05.402963    5520 client.go:168] LocalClient.Create starting
	I0725 11:20:05.403100    5520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:05.403158    5520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:05.403171    5520 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:05.403226    5520 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:05.403267    5520 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:05.403278    5520 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:05.403748    5520 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:05.566171    5520 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:05.624920    5520 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:05.624926    5520 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:05.625110    5520 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:20:05.634632    5520 main.go:141] libmachine: STDOUT: 
	I0725 11:20:05.634648    5520 main.go:141] libmachine: STDERR: 
	I0725 11:20:05.634696    5520 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2 +20000M
	I0725 11:20:05.642544    5520 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:05.642566    5520 main.go:141] libmachine: STDERR: 
	I0725 11:20:05.642587    5520 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:20:05.642592    5520 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:05.642599    5520 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:05.642630    5520 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fa:5b:4d:c6:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kindnet-411000/disk.qcow2
	I0725 11:20:05.644324    5520 main.go:141] libmachine: STDOUT: 
	I0725 11:20:05.644341    5520 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:05.644354    5520 client.go:171] duration metric: took 241.392417ms to LocalClient.Create
	I0725 11:20:07.646472    5520 start.go:128] duration metric: took 2.288507209s to createHost
	I0725 11:20:07.646532    5520 start.go:83] releasing machines lock for "kindnet-411000", held for 2.288931333s
	W0725 11:20:07.646878    5520 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:07.655548    5520 out.go:177] 
	W0725 11:20:07.661620    5520 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:20:07.661646    5520 out.go:239] * 
	* 
	W0725 11:20:07.664299    5520 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:20:07.672494    5520 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.73s)
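The launch lines in these logs also show how the pieces fit together: socket_vmnet_client connects to the daemon's socket and then execs qemu-system-aarch64 with the connected socket inherited as file descriptor 3, which is what `-netdev socket,id=net0,fd=3` refers to. That makes the failure easy to reproduce outside minikube; the stand-in command in the sketch below is a hypothetical placeholder for QEMU.

	# A healthy daemon runs the child command; a dead one prints the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true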

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.849585958s)

-- stdout --
	* [flannel-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-411000" primary control-plane node in "flannel-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:20:09.972610    5633 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:09.972733    5633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:09.972735    5633 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:09.972738    5633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:09.972872    5633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:20:09.973932    5633 out.go:298] Setting JSON to false
	I0725 11:20:09.990359    5633 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4773,"bootTime":1721926836,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:20:09.990417    5633 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:09.997196    5633 out.go:177] * [flannel-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:20:10.005128    5633 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:10.005175    5633 notify.go:220] Checking for updates...
	I0725 11:20:10.010459    5633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:20:10.013090    5633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:20:10.016125    5633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:10.019129    5633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:20:10.022168    5633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:10.025402    5633 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:10.025470    5633 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:20:10.025534    5633 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:10.030053    5633 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:20:10.037129    5633 start.go:297] selected driver: qemu2
	I0725 11:20:10.037138    5633 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:20:10.037145    5633 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:10.039412    5633 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:10.042137    5633 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:20:10.045190    5633 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:10.045212    5633 cni.go:84] Creating CNI manager for "flannel"
	I0725 11:20:10.045229    5633 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0725 11:20:10.045257    5633 start.go:340] cluster config:
	{Name:flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:10.048700    5633 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:10.055959    5633 out.go:177] * Starting "flannel-411000" primary control-plane node in "flannel-411000" cluster
	I0725 11:20:10.060122    5633 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:10.060137    5633 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:20:10.060147    5633 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:10.060204    5633 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:20:10.060213    5633 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:10.060271    5633 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/flannel-411000/config.json ...
	I0725 11:20:10.060282    5633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/flannel-411000/config.json: {Name:mkd8ffd07c0d603ac2956b7e76f6f2fdf129f187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:10.060596    5633 start.go:360] acquireMachinesLock for flannel-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:10.060631    5633 start.go:364] duration metric: took 29.917µs to acquireMachinesLock for "flannel-411000"
	I0725 11:20:10.060642    5633 start.go:93] Provisioning new machine with config: &{Name:flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:10.060674    5633 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:10.062500    5633 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:10.078215    5633 start.go:159] libmachine.API.Create for "flannel-411000" (driver="qemu2")
	I0725 11:20:10.078257    5633 client.go:168] LocalClient.Create starting
	I0725 11:20:10.078314    5633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:10.078344    5633 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:10.078352    5633 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:10.078391    5633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:10.078413    5633 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:10.078422    5633 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:10.078847    5633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:10.229660    5633 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:10.327965    5633 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:10.327971    5633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:10.328157    5633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:10.337580    5633 main.go:141] libmachine: STDOUT: 
	I0725 11:20:10.337600    5633 main.go:141] libmachine: STDERR: 
	I0725 11:20:10.337653    5633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2 +20000M
	I0725 11:20:10.345621    5633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:10.345636    5633 main.go:141] libmachine: STDERR: 
	I0725 11:20:10.345670    5633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:10.345674    5633 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:10.345691    5633 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:10.345725    5633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a0:2c:d5:d1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:10.347322    5633 main.go:141] libmachine: STDOUT: 
	I0725 11:20:10.347336    5633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:10.347357    5633 client.go:171] duration metric: took 269.103166ms to LocalClient.Create
	I0725 11:20:12.349543    5633 start.go:128] duration metric: took 2.288912833s to createHost
	I0725 11:20:12.349637    5633 start.go:83] releasing machines lock for "flannel-411000", held for 2.289070917s
	W0725 11:20:12.349710    5633 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:12.361089    5633 out.go:177] * Deleting "flannel-411000" in qemu2 ...
	W0725 11:20:12.391288    5633 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:12.391324    5633 start.go:729] Will try again in 5 seconds ...
	I0725 11:20:17.393439    5633 start.go:360] acquireMachinesLock for flannel-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:17.394110    5633 start.go:364] duration metric: took 525.375µs to acquireMachinesLock for "flannel-411000"
	I0725 11:20:17.394411    5633 start.go:93] Provisioning new machine with config: &{Name:flannel-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:17.394685    5633 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:17.403469    5633 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:17.454258    5633 start.go:159] libmachine.API.Create for "flannel-411000" (driver="qemu2")
	I0725 11:20:17.454309    5633 client.go:168] LocalClient.Create starting
	I0725 11:20:17.454415    5633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:17.454480    5633 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:17.454497    5633 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:17.454568    5633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:17.454612    5633 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:17.454625    5633 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:17.455156    5633 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:17.616443    5633 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:17.725768    5633 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:17.725774    5633 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:17.725951    5633 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:17.735344    5633 main.go:141] libmachine: STDOUT: 
	I0725 11:20:17.735363    5633 main.go:141] libmachine: STDERR: 
	I0725 11:20:17.735423    5633 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2 +20000M
	I0725 11:20:17.743483    5633 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:17.743505    5633 main.go:141] libmachine: STDERR: 
	I0725 11:20:17.743515    5633 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:17.743521    5633 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:17.743533    5633 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:17.743565    5633 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:88:4c:d0:76:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/flannel-411000/disk.qcow2
	I0725 11:20:17.745172    5633 main.go:141] libmachine: STDOUT: 
	I0725 11:20:17.745193    5633 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:17.745215    5633 client.go:171] duration metric: took 290.910917ms to LocalClient.Create
	I0725 11:20:19.747382    5633 start.go:128] duration metric: took 2.352715s to createHost
	I0725 11:20:19.747510    5633 start.go:83] releasing machines lock for "flannel-411000", held for 2.353433542s
	W0725 11:20:19.747980    5633 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:19.762717    5633 out.go:177] 
	W0725 11:20:19.766832    5633 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:20:19.766865    5633 out.go:239] * 
	* 
	W0725 11:20:19.769688    5633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:20:19.780670    5633 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
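
Every create attempt in this group of tests dies at the same point, and the stderr above shows why: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the socket_vmnet daemon's unix socket at /var/run/socket_vmnet before it can hand QEMU the network file descriptor referenced by -netdev socket,id=net0,fd=3. That connect is refused, so the VM never starts. A quick way to confirm whether the daemon is listening before rerunning is to dial the socket directly; the sketch below is not part of the test suite and assumes only the socket path shown in the log.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken verbatim from the failing runs above.
		const sock = "/var/run/socket_vmnet"

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the STDERR captured in the
			// log: nothing is accepting on the socket, so the daemon is down or
			// listening on a different path.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections on", sock)
	}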

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.827252458s)

-- stdout --
	* [enable-default-cni-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-411000" primary control-plane node in "enable-default-cni-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:20:22.170121    5753 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:22.170244    5753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:22.170247    5753 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:22.170250    5753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:22.170397    5753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:20:22.171442    5753 out.go:298] Setting JSON to false
	I0725 11:20:22.187900    5753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4786,"bootTime":1721926836,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:20:22.187966    5753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:22.194970    5753 out.go:177] * [enable-default-cni-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:20:22.202899    5753 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:22.202937    5753 notify.go:220] Checking for updates...
	I0725 11:20:22.209910    5753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:20:22.212920    5753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:20:22.215936    5753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:22.218872    5753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:20:22.221859    5753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:22.225224    5753 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:22.225301    5753 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:20:22.225372    5753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:22.228780    5753 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:20:22.235867    5753 start.go:297] selected driver: qemu2
	I0725 11:20:22.235874    5753 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:20:22.235883    5753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:22.238176    5753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:22.240895    5753 out.go:177] * Automatically selected the socket_vmnet network
	E0725 11:20:22.243944    5753 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0725 11:20:22.243957    5753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:22.243970    5753 cni.go:84] Creating CNI manager for "bridge"
	I0725 11:20:22.243975    5753 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:20:22.244015    5753 start.go:340] cluster config:
	{Name:enable-default-cni-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:22.247632    5753 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:22.252856    5753 out.go:177] * Starting "enable-default-cni-411000" primary control-plane node in "enable-default-cni-411000" cluster
	I0725 11:20:22.256874    5753 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:22.256892    5753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:20:22.256910    5753 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:22.256974    5753 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:20:22.256980    5753 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:22.257048    5753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/enable-default-cni-411000/config.json ...
	I0725 11:20:22.257060    5753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/enable-default-cni-411000/config.json: {Name:mk690d60468ed172f348408f1ad1b24a5e24bd75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:22.257390    5753 start.go:360] acquireMachinesLock for enable-default-cni-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:22.257426    5753 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "enable-default-cni-411000"
	I0725 11:20:22.257438    5753 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:22.257472    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:22.265912    5753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:22.282521    5753 start.go:159] libmachine.API.Create for "enable-default-cni-411000" (driver="qemu2")
	I0725 11:20:22.282547    5753 client.go:168] LocalClient.Create starting
	I0725 11:20:22.282611    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:22.282641    5753 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:22.282651    5753 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:22.282690    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:22.282716    5753 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:22.282723    5753 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:22.283135    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:22.433983    5753 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:22.491614    5753 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:22.491624    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:22.491839    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:22.502089    5753 main.go:141] libmachine: STDOUT: 
	I0725 11:20:22.502114    5753 main.go:141] libmachine: STDERR: 
	I0725 11:20:22.502187    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2 +20000M
	I0725 11:20:22.511484    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:22.511518    5753 main.go:141] libmachine: STDERR: 
	I0725 11:20:22.511536    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:22.511540    5753 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:22.511556    5753 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:22.511585    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:05:6d:a1:b0:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:22.513630    5753 main.go:141] libmachine: STDOUT: 
	I0725 11:20:22.513648    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:22.513667    5753 client.go:171] duration metric: took 231.121458ms to LocalClient.Create
	I0725 11:20:24.515788    5753 start.go:128] duration metric: took 2.258368s to createHost
	I0725 11:20:24.515861    5753 start.go:83] releasing machines lock for "enable-default-cni-411000", held for 2.258497833s
	W0725 11:20:24.516029    5753 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:24.530403    5753 out.go:177] * Deleting "enable-default-cni-411000" in qemu2 ...
	W0725 11:20:24.555410    5753 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:24.555435    5753 start.go:729] Will try again in 5 seconds ...
	I0725 11:20:29.555576    5753 start.go:360] acquireMachinesLock for enable-default-cni-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:29.556064    5753 start.go:364] duration metric: took 408.041µs to acquireMachinesLock for "enable-default-cni-411000"
	I0725 11:20:29.556167    5753 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:29.556311    5753 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:29.565785    5753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:29.605727    5753 start.go:159] libmachine.API.Create for "enable-default-cni-411000" (driver="qemu2")
	I0725 11:20:29.605771    5753 client.go:168] LocalClient.Create starting
	I0725 11:20:29.605887    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:29.605943    5753 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:29.605961    5753 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:29.606040    5753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:29.606079    5753 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:29.606088    5753 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:29.606619    5753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:29.764801    5753 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:29.905535    5753 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:29.905544    5753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:29.905845    5753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:29.915655    5753 main.go:141] libmachine: STDOUT: 
	I0725 11:20:29.915677    5753 main.go:141] libmachine: STDERR: 
	I0725 11:20:29.915741    5753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2 +20000M
	I0725 11:20:29.924004    5753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:29.924019    5753 main.go:141] libmachine: STDERR: 
	I0725 11:20:29.924030    5753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:29.924035    5753 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:29.924055    5753 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:29.924080    5753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:70:70:ee:68:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/enable-default-cni-411000/disk.qcow2
	I0725 11:20:29.925715    5753 main.go:141] libmachine: STDOUT: 
	I0725 11:20:29.925731    5753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:29.925754    5753 client.go:171] duration metric: took 319.979209ms to LocalClient.Create
	I0725 11:20:31.927908    5753 start.go:128] duration metric: took 2.371641875s to createHost
	I0725 11:20:31.927985    5753 start.go:83] releasing machines lock for "enable-default-cni-411000", held for 2.371979833s
	W0725 11:20:31.928421    5753 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:31.939981    5753 out.go:177] 
	W0725 11:20:31.944118    5753 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:20:31.944148    5753 out.go:239] * 
	* 
	W0725 11:20:31.946730    5753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:20:31.954001    5753 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
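
Two details of this run are worth noting. First, the E0725 line in the stderr shows the deprecated --enable-default-cni flag being rewritten to --cni=bridge before provisioning starts, so this test exercises the same bridge CNI path as TestNetworkPlugins/group/bridge below. Second, the log traces minikube's create-retry flow: StartHost fails, the half-created profile is deleted, one retry follows after a five-second pause, and the second failure is terminal (exit status 80). A minimal sketch of that observed control flow, where createHost is a hypothetical stand-in for the real libmachine create path:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost is a hypothetical stub; it fails the same way every
	// attempt in the log does.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "enable-default-cni-411000"

		if err := createHost(profile); err != nil {
			// Matches "! StartHost failed, but will try again" in the log.
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(profile); err != nil {
				// The second failure is terminal, as in the log's
				// "X Exiting due to GUEST_PROVISION" line.
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			}
		}
	}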

TestNetworkPlugins/group/bridge/Start (10.04s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (10.034343792s)

-- stdout --
	* [bridge-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-411000" primary control-plane node in "bridge-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:20:34.167433    5865 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:34.167571    5865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:34.167577    5865 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:34.167580    5865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:34.167722    5865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:20:34.168936    5865 out.go:298] Setting JSON to false
	I0725 11:20:34.185501    5865 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4798,"bootTime":1721926836,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:20:34.185562    5865 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:34.188716    5865 out.go:177] * [bridge-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:20:34.195585    5865 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:34.195634    5865 notify.go:220] Checking for updates...
	I0725 11:20:34.202579    5865 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:20:34.205616    5865 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:20:34.208488    5865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:34.211650    5865 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:20:34.214603    5865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:34.216056    5865 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:34.216121    5865 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:20:34.216181    5865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:34.220567    5865 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:20:34.227412    5865 start.go:297] selected driver: qemu2
	I0725 11:20:34.227419    5865 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:20:34.227424    5865 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:34.229495    5865 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:34.232570    5865 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:20:34.235669    5865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:34.235715    5865 cni.go:84] Creating CNI manager for "bridge"
	I0725 11:20:34.235719    5865 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:20:34.235760    5865 start.go:340] cluster config:
	{Name:bridge-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:34.239179    5865 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:34.246549    5865 out.go:177] * Starting "bridge-411000" primary control-plane node in "bridge-411000" cluster
	I0725 11:20:34.250626    5865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:34.250642    5865 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:20:34.250654    5865 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:34.250723    5865 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:20:34.250729    5865 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:34.250796    5865 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/bridge-411000/config.json ...
	I0725 11:20:34.250808    5865 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/bridge-411000/config.json: {Name:mk34818701c6ba90a525d29378f88b5f1381960d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:34.251017    5865 start.go:360] acquireMachinesLock for bridge-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:34.251049    5865 start.go:364] duration metric: took 26.834µs to acquireMachinesLock for "bridge-411000"
	I0725 11:20:34.251060    5865 start.go:93] Provisioning new machine with config: &{Name:bridge-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:34.251095    5865 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:34.258569    5865 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:34.275367    5865 start.go:159] libmachine.API.Create for "bridge-411000" (driver="qemu2")
	I0725 11:20:34.275397    5865 client.go:168] LocalClient.Create starting
	I0725 11:20:34.275461    5865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:34.275492    5865 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:34.275501    5865 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:34.275540    5865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:34.275562    5865 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:34.275569    5865 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:34.275959    5865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:34.427953    5865 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:34.656206    5865 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:34.656215    5865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:34.656420    5865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:34.665789    5865 main.go:141] libmachine: STDOUT: 
	I0725 11:20:34.665814    5865 main.go:141] libmachine: STDERR: 
	I0725 11:20:34.665869    5865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2 +20000M
	I0725 11:20:34.673769    5865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:34.673783    5865 main.go:141] libmachine: STDERR: 
	I0725 11:20:34.673795    5865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:34.673800    5865 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:34.673814    5865 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:34.673842    5865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:cf:14:30:db:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:34.675471    5865 main.go:141] libmachine: STDOUT: 
	I0725 11:20:34.675484    5865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:34.675504    5865 client.go:171] duration metric: took 400.115125ms to LocalClient.Create
	I0725 11:20:36.677800    5865 start.go:128] duration metric: took 2.426747625s to createHost
	I0725 11:20:36.677890    5865 start.go:83] releasing machines lock for "bridge-411000", held for 2.426911125s
	W0725 11:20:36.677937    5865 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:36.684738    5865 out.go:177] * Deleting "bridge-411000" in qemu2 ...
	W0725 11:20:36.714349    5865 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:36.714383    5865 start.go:729] Will try again in 5 seconds ...
	I0725 11:20:41.716483    5865 start.go:360] acquireMachinesLock for bridge-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:41.717165    5865 start.go:364] duration metric: took 519.208µs to acquireMachinesLock for "bridge-411000"
	I0725 11:20:41.717239    5865 start.go:93] Provisioning new machine with config: &{Name:bridge-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:41.717489    5865 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:41.727260    5865 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:41.775290    5865 start.go:159] libmachine.API.Create for "bridge-411000" (driver="qemu2")
	I0725 11:20:41.775336    5865 client.go:168] LocalClient.Create starting
	I0725 11:20:41.775466    5865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:41.775546    5865 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:41.775563    5865 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:41.775628    5865 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:41.775674    5865 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:41.775692    5865 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:41.776342    5865 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:41.936529    5865 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:42.111631    5865 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:42.111647    5865 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:42.111843    5865 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:42.121501    5865 main.go:141] libmachine: STDOUT: 
	I0725 11:20:42.121523    5865 main.go:141] libmachine: STDERR: 
	I0725 11:20:42.121573    5865 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2 +20000M
	I0725 11:20:42.129520    5865 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:42.129538    5865 main.go:141] libmachine: STDERR: 
	I0725 11:20:42.129552    5865 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:42.129561    5865 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:42.129570    5865 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:42.129612    5865 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:bd:59:d9:3e:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000/disk.qcow2
	I0725 11:20:42.131311    5865 main.go:141] libmachine: STDOUT: 
	I0725 11:20:42.131334    5865 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:42.131353    5865 client.go:171] duration metric: took 356.023167ms to LocalClient.Create
	I0725 11:20:44.133607    5865 start.go:128] duration metric: took 2.416100917s to createHost
	I0725 11:20:44.133695    5865 start.go:83] releasing machines lock for "bridge-411000", held for 2.416586875s
	W0725 11:20:44.134023    5865 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:44.144589    5865 out.go:177] 
	W0725 11:20:44.148588    5865 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:20:44.148615    5865 out.go:239] * 
	* 
	W0725 11:20:44.149952    5865 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:20:44.159464    5865 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (10.04s)
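
Note that everything up to the network step succeeds on each attempt: the raw-to-qcow2 conversion and the +20000M resize both return cleanly, and only the socket_vmnet connection fails. The two disk-image commands can be replayed in isolation to rule out qemu-img itself. The sketch below shells out exactly as the "executing:" log lines show and echoes output in the same "STDOUT:"/"STDERR:" form the driver logs; the paths are the ones from the bridge-411000 run above.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// run executes one command and prints its output the way the
	// libmachine log lines above do.
	func run(name string, args ...string) error {
		var stdout, stderr bytes.Buffer
		cmd := exec.Command(name, args...)
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		err := cmd.Run()
		fmt.Printf("STDOUT: %s\n", stdout.String())
		fmt.Printf("STDERR: %s\n", stderr.String())
		return err
	}

	func main() {
		base := "/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/bridge-411000"

		// Step 1 from the log: convert the raw disk image to qcow2.
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
			base+"/disk.qcow2.raw", base+"/disk.qcow2"); err != nil {
			fmt.Println("convert failed:", err)
			return
		}
		// Step 2 from the log: grow the image by 20000M.
		if err := run("qemu-img", "resize", base+"/disk.qcow2", "+20000M"); err != nil {
			fmt.Println("resize failed:", err)
		}
	}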

TestNetworkPlugins/group/kubenet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-411000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.817534875s)

-- stdout --
	* [kubenet-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-411000" primary control-plane node in "kubenet-411000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-411000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:20:46.334668    5977 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:46.334810    5977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:46.334814    5977 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:46.334816    5977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:46.334964    5977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:20:46.336342    5977 out.go:298] Setting JSON to false
	I0725 11:20:46.354745    5977 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4810,"bootTime":1721926836,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:20:46.354831    5977 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:46.360061    5977 out.go:177] * [kubenet-411000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:20:46.368079    5977 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:46.368176    5977 notify.go:220] Checking for updates...
	I0725 11:20:46.373252    5977 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:20:46.376054    5977 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:20:46.379119    5977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:46.382124    5977 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:20:46.385077    5977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:46.388526    5977 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:46.388593    5977 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:20:46.388643    5977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:46.393124    5977 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:20:46.400043    5977 start.go:297] selected driver: qemu2
	I0725 11:20:46.400049    5977 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:20:46.400055    5977 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:46.402262    5977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:46.405107    5977 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:20:46.408145    5977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:46.408159    5977 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0725 11:20:46.408180    5977 start.go:340] cluster config:
	{Name:kubenet-411000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:46.411558    5977 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:46.418857    5977 out.go:177] * Starting "kubenet-411000" primary control-plane node in "kubenet-411000" cluster
	I0725 11:20:46.423088    5977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:20:46.423106    5977 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:20:46.423117    5977 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:46.423172    5977 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:20:46.423179    5977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:20:46.423257    5977 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kubenet-411000/config.json ...
	I0725 11:20:46.423268    5977 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/kubenet-411000/config.json: {Name:mk17edd884478de9f4352381941b75945de1853e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:46.423598    5977 start.go:360] acquireMachinesLock for kubenet-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:46.423630    5977 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "kubenet-411000"
	I0725 11:20:46.423640    5977 start.go:93] Provisioning new machine with config: &{Name:kubenet-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:46.423670    5977 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:46.431042    5977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:46.446570    5977 start.go:159] libmachine.API.Create for "kubenet-411000" (driver="qemu2")
	I0725 11:20:46.446602    5977 client.go:168] LocalClient.Create starting
	I0725 11:20:46.446667    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:46.446697    5977 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:46.446706    5977 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:46.446748    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:46.446774    5977 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:46.446784    5977 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:46.447192    5977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:46.599081    5977 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:46.659496    5977 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:46.659508    5977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:46.659671    5977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:46.668845    5977 main.go:141] libmachine: STDOUT: 
	I0725 11:20:46.668867    5977 main.go:141] libmachine: STDERR: 
	I0725 11:20:46.668911    5977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2 +20000M
	I0725 11:20:46.676990    5977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:46.677010    5977 main.go:141] libmachine: STDERR: 
	I0725 11:20:46.677031    5977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:46.677034    5977 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:46.677046    5977 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:46.677076    5977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:3a:6a:8c:e2:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:46.678742    5977 main.go:141] libmachine: STDOUT: 
	I0725 11:20:46.678761    5977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:46.678779    5977 client.go:171] duration metric: took 232.181459ms to LocalClient.Create
	I0725 11:20:48.680922    5977 start.go:128] duration metric: took 2.25729825s to createHost
	I0725 11:20:48.681074    5977 start.go:83] releasing machines lock for "kubenet-411000", held for 2.257486458s
	W0725 11:20:48.681244    5977 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:48.694646    5977 out.go:177] * Deleting "kubenet-411000" in qemu2 ...
	W0725 11:20:48.724745    5977 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:48.724799    5977 start.go:729] Will try again in 5 seconds ...
	I0725 11:20:53.726895    5977 start.go:360] acquireMachinesLock for kubenet-411000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:53.727545    5977 start.go:364] duration metric: took 537.75µs to acquireMachinesLock for "kubenet-411000"
	I0725 11:20:53.727702    5977 start.go:93] Provisioning new machine with config: &{Name:kubenet-411000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-411000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:53.728026    5977 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:53.736617    5977 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0725 11:20:53.786223    5977 start.go:159] libmachine.API.Create for "kubenet-411000" (driver="qemu2")
	I0725 11:20:53.786292    5977 client.go:168] LocalClient.Create starting
	I0725 11:20:53.786404    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:53.786467    5977 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:53.786481    5977 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:53.786544    5977 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:53.786589    5977 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:53.786608    5977 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:53.787159    5977 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:53.946154    5977 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:54.057963    5977 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:54.057976    5977 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:54.058173    5977 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:54.068853    5977 main.go:141] libmachine: STDOUT: 
	I0725 11:20:54.068876    5977 main.go:141] libmachine: STDERR: 
	I0725 11:20:54.068945    5977 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2 +20000M
	I0725 11:20:54.078275    5977 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:54.078296    5977 main.go:141] libmachine: STDERR: 
	I0725 11:20:54.078317    5977 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:54.078324    5977 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:54.078336    5977 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:54.078363    5977 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a1:13:51:d7:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/kubenet-411000/disk.qcow2
	I0725 11:20:54.080304    5977 main.go:141] libmachine: STDOUT: 
	I0725 11:20:54.080319    5977 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:54.080331    5977 client.go:171] duration metric: took 294.042875ms to LocalClient.Create
	I0725 11:20:56.082473    5977 start.go:128] duration metric: took 2.354462166s to createHost
	I0725 11:20:56.082589    5977 start.go:83] releasing machines lock for "kubenet-411000", held for 2.355091959s
	W0725 11:20:56.083061    5977 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-411000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:20:56.095721    5977 out.go:177] 
	W0725 11:20:56.099750    5977 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:20:56.099774    5977 out.go:239] * 
	* 
	W0725 11:20:56.101408    5977 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:20:56.108687    5977 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.82s)
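
When the socket exists but refuses connections, the daemon behind it has usually died and needs a restart. The exact command depends on how socket_vmnet was installed; the sketch below assumes a Homebrew-managed service, which is an assumption here (the /opt/socket_vmnet/bin path in these logs would also be consistent with the project's manual make install, which registers a launchd job instead):

	# Assumption: socket_vmnet was installed via Homebrew on this agent.
	# The service must run as root to create vmnet interfaces.
	sudo brew services restart socket_vmnet

	# Verify the socket reappeared before re-running the suite.
	ls -l /var/run/socket_vmnet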

TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.807413333s)

-- stdout --
	* [old-k8s-version-309000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-309000" primary control-plane node in "old-k8s-version-309000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-309000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:20:58.266702    6088 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:20:58.266836    6088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:58.266839    6088 out.go:304] Setting ErrFile to fd 2...
	I0725 11:20:58.266842    6088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:20:58.266966    6088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:20:58.268051    6088 out.go:298] Setting JSON to false
	I0725 11:20:58.284294    6088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4822,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:20:58.284397    6088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:20:58.290229    6088 out.go:177] * [old-k8s-version-309000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:20:58.298211    6088 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:20:58.298225    6088 notify.go:220] Checking for updates...
	I0725 11:20:58.305238    6088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:20:58.308250    6088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:20:58.311292    6088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:20:58.314183    6088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:20:58.317216    6088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:20:58.320669    6088 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:20:58.320736    6088 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:20:58.320786    6088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:20:58.325127    6088 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:20:58.332222    6088 start.go:297] selected driver: qemu2
	I0725 11:20:58.332228    6088 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:20:58.332236    6088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:20:58.334399    6088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:20:58.337184    6088 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:20:58.340318    6088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:20:58.340343    6088 cni.go:84] Creating CNI manager for ""
	I0725 11:20:58.340351    6088 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 11:20:58.340379    6088 start.go:340] cluster config:
	{Name:old-k8s-version-309000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:20:58.343888    6088 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:20:58.349175    6088 out.go:177] * Starting "old-k8s-version-309000" primary control-plane node in "old-k8s-version-309000" cluster
	I0725 11:20:58.353182    6088 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 11:20:58.353197    6088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 11:20:58.353206    6088 cache.go:56] Caching tarball of preloaded images
	I0725 11:20:58.353260    6088 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:20:58.353265    6088 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0725 11:20:58.353333    6088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/old-k8s-version-309000/config.json ...
	I0725 11:20:58.353343    6088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/old-k8s-version-309000/config.json: {Name:mkf6d79fccc306c87f85e45f64ac1e7877f067af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:20:58.353655    6088 start.go:360] acquireMachinesLock for old-k8s-version-309000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:20:58.353688    6088 start.go:364] duration metric: took 23.959µs to acquireMachinesLock for "old-k8s-version-309000"
	I0725 11:20:58.353698    6088 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:20:58.353730    6088 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:20:58.358231    6088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:20:58.372987    6088 start.go:159] libmachine.API.Create for "old-k8s-version-309000" (driver="qemu2")
	I0725 11:20:58.373011    6088 client.go:168] LocalClient.Create starting
	I0725 11:20:58.373069    6088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:20:58.373099    6088 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:58.373109    6088 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:58.373148    6088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:20:58.373170    6088 main.go:141] libmachine: Decoding PEM data...
	I0725 11:20:58.373178    6088 main.go:141] libmachine: Parsing certificate...
	I0725 11:20:58.373605    6088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:20:58.525138    6088 main.go:141] libmachine: Creating SSH key...
	I0725 11:20:58.651775    6088 main.go:141] libmachine: Creating Disk image...
	I0725 11:20:58.651785    6088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:20:58.651976    6088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:20:58.661706    6088 main.go:141] libmachine: STDOUT: 
	I0725 11:20:58.661724    6088 main.go:141] libmachine: STDERR: 
	I0725 11:20:58.661777    6088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2 +20000M
	I0725 11:20:58.669657    6088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:20:58.669669    6088 main.go:141] libmachine: STDERR: 
	I0725 11:20:58.669681    6088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:20:58.669686    6088 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:20:58.669700    6088 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:20:58.669732    6088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:24:f7:7f:cc:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:20:58.671431    6088 main.go:141] libmachine: STDOUT: 
	I0725 11:20:58.671444    6088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:20:58.671462    6088 client.go:171] duration metric: took 298.456667ms to LocalClient.Create
	I0725 11:21:00.673527    6088 start.go:128] duration metric: took 2.319859708s to createHost
	I0725 11:21:00.673582    6088 start.go:83] releasing machines lock for "old-k8s-version-309000", held for 2.319964125s
	W0725 11:21:00.673612    6088 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:00.684725    6088 out.go:177] * Deleting "old-k8s-version-309000" in qemu2 ...
	W0725 11:21:00.705599    6088 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:00.705611    6088 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:05.707087    6088 start.go:360] acquireMachinesLock for old-k8s-version-309000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:05.707434    6088 start.go:364] duration metric: took 276.125µs to acquireMachinesLock for "old-k8s-version-309000"
	I0725 11:21:05.707520    6088 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:05.707712    6088 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:05.713186    6088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:05.755122    6088 start.go:159] libmachine.API.Create for "old-k8s-version-309000" (driver="qemu2")
	I0725 11:21:05.755211    6088 client.go:168] LocalClient.Create starting
	I0725 11:21:05.755327    6088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:05.755389    6088 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:05.755403    6088 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:05.755459    6088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:05.755498    6088 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:05.755510    6088 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:05.756039    6088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:05.913468    6088 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:05.971157    6088 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:05.971164    6088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:05.971349    6088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:21:05.993900    6088 main.go:141] libmachine: STDOUT: 
	I0725 11:21:05.993919    6088 main.go:141] libmachine: STDERR: 
	I0725 11:21:05.993973    6088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2 +20000M
	I0725 11:21:06.002346    6088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:06.002360    6088 main.go:141] libmachine: STDERR: 
	I0725 11:21:06.002372    6088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:21:06.002377    6088 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:06.002398    6088 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:06.002430    6088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:6e:d0:d9:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:21:06.004135    6088 main.go:141] libmachine: STDOUT: 
	I0725 11:21:06.004160    6088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:06.004173    6088 client.go:171] duration metric: took 248.965042ms to LocalClient.Create
	I0725 11:21:08.006291    6088 start.go:128] duration metric: took 2.298625166s to createHost
	I0725 11:21:08.006404    6088 start.go:83] releasing machines lock for "old-k8s-version-309000", held for 2.299010458s
	W0725 11:21:08.006785    6088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-309000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:08.016369    6088 out.go:177] 
	W0725 11:21:08.020449    6088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:08.020481    6088 out.go:239] * 
	* 
	W0725 11:21:08.022955    6088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:08.036319    6088 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (66.21125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.88s)
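
The exit status 80 here accompanies the GUEST_PROVISION error shown in stderr, and because FirstStart never created the VM, minikube never wrote a kubeconfig context for the profile. The remaining subtests in this serial group (DeployApp, EnableAddonWhileActive, SecondStart, and the rest) are therefore cascade failures rather than independent bugs: each one trips over the missing context or the stopped host. The missing context can be confirmed with a standard kubectl call (shown for illustration only, not part of the suite):

	# After the failed start, old-k8s-version-309000 will not appear here.
	kubectl config get-contexts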

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-309000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-309000 create -f testdata/busybox.yaml: exit status 1 (30.430084ms)

** stderr ** 
	error: context "old-k8s-version-309000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-309000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (29.309042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (29.287167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-309000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-309000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-309000 describe deploy/metrics-server -n kube-system: exit status 1 (26.596333ms)

** stderr ** 
	error: context "old-k8s-version-309000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-309000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (29.211375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
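
Note that the addons enable command itself reported no error: enabling an addon updates the profile's stored configuration, so it can appear to succeed even though the VM never started, and the failure only surfaces when kubectl asks the nonexistent cluster for the deployment. One way to inspect what was persisted without a running cluster might be the following (an illustrative check, assuming addons list reads the saved profile config):

	# Inspect addon state recorded for the stopped profile.
	out/minikube-darwin-arm64 addons list -p old-k8s-version-309000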

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191470667s)

-- stdout --
	* [old-k8s-version-309000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-309000" primary control-plane node in "old-k8s-version-309000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-309000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-309000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:10.362977    6133 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:10.363086    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:10.363089    6133 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:10.363092    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:10.363234    6133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:10.364254    6133 out.go:298] Setting JSON to false
	I0725 11:21:10.380313    6133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4834,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:10.380380    6133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:10.384910    6133 out.go:177] * [old-k8s-version-309000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:10.392033    6133 notify.go:220] Checking for updates...
	I0725 11:21:10.395844    6133 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:10.398946    6133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:10.401942    6133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:10.405834    6133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:10.408922    6133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:10.411923    6133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:10.415083    6133 config.go:182] Loaded profile config "old-k8s-version-309000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0725 11:21:10.417834    6133 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 11:21:10.420899    6133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:10.424888    6133 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:21:10.431878    6133 start.go:297] selected driver: qemu2
	I0725 11:21:10.431885    6133 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:10.431938    6133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:10.434194    6133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:10.434243    6133 cni.go:84] Creating CNI manager for ""
	I0725 11:21:10.434250    6133 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 11:21:10.434275    6133 start.go:340] cluster config:
	{Name:old-k8s-version-309000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-309000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:10.437646    6133 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:10.444913    6133 out.go:177] * Starting "old-k8s-version-309000" primary control-plane node in "old-k8s-version-309000" cluster
	I0725 11:21:10.448807    6133 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 11:21:10.448823    6133 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 11:21:10.448835    6133 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:10.448897    6133 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:10.448905    6133 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0725 11:21:10.448973    6133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/old-k8s-version-309000/config.json ...
	I0725 11:21:10.449442    6133 start.go:360] acquireMachinesLock for old-k8s-version-309000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:10.449469    6133 start.go:364] duration metric: took 21.459µs to acquireMachinesLock for "old-k8s-version-309000"
	I0725 11:21:10.449478    6133 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:10.449482    6133 fix.go:54] fixHost starting: 
	I0725 11:21:10.449599    6133 fix.go:112] recreateIfNeeded on old-k8s-version-309000: state=Stopped err=<nil>
	W0725 11:21:10.449607    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:10.453915    6133 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-309000" ...
	I0725 11:21:10.461831    6133 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:10.461865    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:6e:d0:d9:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:21:10.463811    6133 main.go:141] libmachine: STDOUT: 
	I0725 11:21:10.463830    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:10.463857    6133 fix.go:56] duration metric: took 14.375125ms for fixHost
	I0725 11:21:10.463861    6133 start.go:83] releasing machines lock for "old-k8s-version-309000", held for 14.388667ms
	W0725 11:21:10.463868    6133 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:10.463908    6133 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:10.463912    6133 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:15.465987    6133 start.go:360] acquireMachinesLock for old-k8s-version-309000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:15.466506    6133 start.go:364] duration metric: took 400.333µs to acquireMachinesLock for "old-k8s-version-309000"
	I0725 11:21:15.466581    6133 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:15.466598    6133 fix.go:54] fixHost starting: 
	I0725 11:21:15.467252    6133 fix.go:112] recreateIfNeeded on old-k8s-version-309000: state=Stopped err=<nil>
	W0725 11:21:15.467272    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:15.475507    6133 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-309000" ...
	I0725 11:21:15.478633    6133 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:15.478929    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ae:6e:d0:d9:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/old-k8s-version-309000/disk.qcow2
	I0725 11:21:15.488330    6133 main.go:141] libmachine: STDOUT: 
	I0725 11:21:15.488396    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:15.488474    6133 fix.go:56] duration metric: took 21.879584ms for fixHost
	I0725 11:21:15.488489    6133 start.go:83] releasing machines lock for "old-k8s-version-309000", held for 21.964709ms
	W0725 11:21:15.488705    6133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-309000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-309000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:15.497505    6133 out.go:177] 
	W0725 11:21:15.501652    6133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:15.501680    6133 out.go:239] * 
	* 
	W0725 11:21:15.503275    6133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:15.516649    6133 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-309000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (54.539042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
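
Note: every qemu2 start in this group dies at the same step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", which means nothing is listening on the socket_vmnet UNIX socket on the build host. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (the service-management commands are an assumption about this host, not taken from the log):

	# Is the UNIX socket present, and is the daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet installed via Homebrew; restart its service.
	HOMEBREW=$(which brew) && sudo "$HOMEBREW" services restart socket_vmnet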

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-309000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (32.111125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-309000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.503042ms)

** stderr ** 
	error: context "old-k8s-version-309000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-309000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (28.771833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-309000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (30.087709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
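
Note: the (-want +got) diff above lists the eight images the test expects `image list` to report for Kubernetes v1.20.0; because the VM never booted, the command returns an empty set and the entire want list diffs out as missing. The check can be replayed by hand with the same binary and profile (both taken from the log):

	# With the host stopped this prints no images, reproducing the diff.
	out/minikube-darwin-arm64 -p old-k8s-version-309000 image list --format=json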

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-309000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-309000 --alsologtostderr -v=1: exit status 83 (41.629125ms)

-- stdout --
	* The control-plane node old-k8s-version-309000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-309000"

-- /stdout --
** stderr ** 
	I0725 11:21:15.771475    6154 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:15.772491    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:15.772495    6154 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:15.772497    6154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:15.772679    6154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:15.772866    6154 out.go:298] Setting JSON to false
	I0725 11:21:15.772873    6154 mustload.go:65] Loading cluster: old-k8s-version-309000
	I0725 11:21:15.773076    6154 config.go:182] Loaded profile config "old-k8s-version-309000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0725 11:21:15.777588    6154 out.go:177] * The control-plane node old-k8s-version-309000 host is not running: state=Stopped
	I0725 11:21:15.780788    6154 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-309000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-309000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (29.180959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (29.125541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-309000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (9.78s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.72835575s)

-- stdout --
	* [no-preload-422000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-422000" primary control-plane node in "no-preload-422000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-422000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:16.088576    6171 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:16.088746    6171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:16.088750    6171 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:16.088752    6171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:16.088876    6171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:16.090006    6171 out.go:298] Setting JSON to false
	I0725 11:21:16.106392    6171 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4840,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:16.106475    6171 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:16.111351    6171 out.go:177] * [no-preload-422000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:16.118459    6171 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:16.118520    6171 notify.go:220] Checking for updates...
	I0725 11:21:16.125410    6171 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:16.128440    6171 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:16.131416    6171 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:16.134356    6171 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:16.137391    6171 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:16.140783    6171 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:16.140846    6171 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:21:16.140902    6171 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:16.144329    6171 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:21:16.150294    6171 start.go:297] selected driver: qemu2
	I0725 11:21:16.150303    6171 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:21:16.150310    6171 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:16.152745    6171 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:21:16.155370    6171 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:21:16.158507    6171 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:16.158549    6171 cni.go:84] Creating CNI manager for ""
	I0725 11:21:16.158555    6171 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:16.158558    6171 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:21:16.158582    6171 start.go:340] cluster config:
	{Name:no-preload-422000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:16.162097    6171 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.169362    6171 out.go:177] * Starting "no-preload-422000" primary control-plane node in "no-preload-422000" cluster
	I0725 11:21:16.173361    6171 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:21:16.173436    6171 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/no-preload-422000/config.json ...
	I0725 11:21:16.173453    6171 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/no-preload-422000/config.json: {Name:mk8e1297f76a04351d9d73e0b1dd16b2a835dbd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:21:16.173473    6171 cache.go:107] acquiring lock: {Name:mk5653692817070271d2551157724158266313f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173473    6171 cache.go:107] acquiring lock: {Name:mkacbf98bc8792972e481cb22a106e61bc17a7a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173508    6171 cache.go:107] acquiring lock: {Name:mk6e0edc4c13d62aafa540441df668aa7d73a837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173518    6171 cache.go:107] acquiring lock: {Name:mkbb761be1797895c4e4b2d2d799a5dee28babe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173629    6171 cache.go:107] acquiring lock: {Name:mk04ecf0f8e1352974e4b7236c0a528c2d68022d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173636    6171 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0725 11:21:16.173662    6171 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 11:21:16.173678    6171 cache.go:107] acquiring lock: {Name:mkee2386a5f4dfba4c0b9e7a5b52292438adc43f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173660    6171 cache.go:107] acquiring lock: {Name:mk385070746c407194c8c713af5f997fa7e90199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173715    6171 cache.go:107] acquiring lock: {Name:mk3e5bc99c23fc45496fa1d020364c4e89bf927d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:16.173768    6171 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0725 11:21:16.173780    6171 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 11:21:16.173807    6171 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 11:21:16.173812    6171 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 340.916µs
	I0725 11:21:16.173818    6171 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 11:21:16.173822    6171 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 11:21:16.173873    6171 start.go:360] acquireMachinesLock for no-preload-422000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:16.173888    6171 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 11:21:16.173916    6171 start.go:364] duration metric: took 33.667µs to acquireMachinesLock for "no-preload-422000"
	I0725 11:21:16.173927    6171 start.go:93] Provisioning new machine with config: &{Name:no-preload-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:16.173975    6171 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:16.173996    6171 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 11:21:16.177432    6171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:16.180704    6171 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0725 11:21:16.180767    6171 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0725 11:21:16.180788    6171 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0725 11:21:16.180831    6171 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0725 11:21:16.180826    6171 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0725 11:21:16.180861    6171 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0725 11:21:16.180884    6171 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0725 11:21:16.193504    6171 start.go:159] libmachine.API.Create for "no-preload-422000" (driver="qemu2")
	I0725 11:21:16.193530    6171 client.go:168] LocalClient.Create starting
	I0725 11:21:16.193596    6171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:16.193623    6171 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:16.193631    6171 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:16.193671    6171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:16.193694    6171 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:16.193701    6171 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:16.194011    6171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:16.350377    6171 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:16.419553    6171 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:16.419580    6171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:16.419781    6171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:16.430158    6171 main.go:141] libmachine: STDOUT: 
	I0725 11:21:16.430176    6171 main.go:141] libmachine: STDERR: 
	I0725 11:21:16.430228    6171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2 +20000M
	I0725 11:21:16.439036    6171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:16.439063    6171 main.go:141] libmachine: STDERR: 
	I0725 11:21:16.439077    6171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:16.439083    6171 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:16.439097    6171 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:16.439131    6171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:87:3c:7a:2c:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:16.441579    6171 main.go:141] libmachine: STDOUT: 
	I0725 11:21:16.441599    6171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:16.441618    6171 client.go:171] duration metric: took 248.09225ms to LocalClient.Create
	I0725 11:21:16.559170    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0725 11:21:16.577167    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0725 11:21:16.587663    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0725 11:21:16.594296    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0725 11:21:16.613966    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0725 11:21:16.693137    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0725 11:21:16.721415    6171 cache.go:162] opening:  /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0725 11:21:16.781989    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0725 11:21:16.782003    6171 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 608.51325ms
	I0725 11:21:16.782014    6171 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0725 11:21:18.441688    6171 start.go:128] duration metric: took 2.267779166s to createHost
	I0725 11:21:18.441702    6171 start.go:83] releasing machines lock for "no-preload-422000", held for 2.267855833s
	W0725 11:21:18.441715    6171 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:18.448927    6171 out.go:177] * Deleting "no-preload-422000" in qemu2 ...
	W0725 11:21:18.458930    6171 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:18.458943    6171 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:19.697872    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0725 11:21:19.697906    6171 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.524342917s
	I0725 11:21:19.697940    6171 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0725 11:21:19.940589    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0725 11:21:19.940620    6171 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.767254625s
	I0725 11:21:19.940638    6171 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0725 11:21:20.255471    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0725 11:21:20.255493    6171 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 4.081968458s
	I0725 11:21:20.255503    6171 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0725 11:21:20.450064    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0725 11:21:20.450092    6171 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 4.276773s
	I0725 11:21:20.450108    6171 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0725 11:21:21.074015    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0725 11:21:21.074064    6171 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.900611917s
	I0725 11:21:21.074089    6171 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0725 11:21:23.459575    6171 start.go:360] acquireMachinesLock for no-preload-422000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:23.459991    6171 start.go:364] duration metric: took 353.208µs to acquireMachinesLock for "no-preload-422000"
	I0725 11:21:23.460106    6171 start.go:93] Provisioning new machine with config: &{Name:no-preload-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:23.460290    6171 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:23.467787    6171 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:23.508520    6171 start.go:159] libmachine.API.Create for "no-preload-422000" (driver="qemu2")
	I0725 11:21:23.508574    6171 client.go:168] LocalClient.Create starting
	I0725 11:21:23.508676    6171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:23.508757    6171 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:23.508776    6171 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:23.508860    6171 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:23.508902    6171 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:23.508913    6171 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:23.509595    6171 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:23.669049    6171 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:23.722913    6171 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:23.722919    6171 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:23.723090    6171 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:23.732484    6171 main.go:141] libmachine: STDOUT: 
	I0725 11:21:23.732505    6171 main.go:141] libmachine: STDERR: 
	I0725 11:21:23.732559    6171 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2 +20000M
	I0725 11:21:23.740962    6171 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:23.740992    6171 main.go:141] libmachine: STDERR: 
	I0725 11:21:23.741002    6171 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:23.741007    6171 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:23.741019    6171 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:23.741063    6171 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:05:5f:dd:b0:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:23.742792    6171 main.go:141] libmachine: STDOUT: 
	I0725 11:21:23.742818    6171 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:23.742830    6171 client.go:171] duration metric: took 234.2595ms to LocalClient.Create
	I0725 11:21:24.364507    6171 cache.go:157] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0725 11:21:24.364545    6171 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 8.191181125s
	I0725 11:21:24.364567    6171 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0725 11:21:24.364597    6171 cache.go:87] Successfully saved all images to host disk.
	I0725 11:21:25.745019    6171 start.go:128] duration metric: took 2.284755083s to createHost
	I0725 11:21:25.745119    6171 start.go:83] releasing machines lock for "no-preload-422000", held for 2.285181834s
	W0725 11:21:25.745404    6171 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-422000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-422000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:25.755915    6171 out.go:177] 
	W0725 11:21:25.763022    6171 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:25.763073    6171 out.go:239] * 
	* 
	W0725 11:21:25.764781    6171 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:25.773709    6171 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (54.324417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.78s)
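Everything in this group fails for the same root cause shown in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created. A minimal triage sketch for the CI host, assuming socket_vmnet was installed as a Homebrew service per the minikube qemu2 driver docs (the restart step is an assumption, not taken from this log):

	# Is the daemon's unix socket present? (path copied from the failing command above)
	ls -l /var/run/socket_vmnet
	# Assumed Homebrew service; the vmnet framework needs root, so the service runs via sudo
	sudo brew services restart socket_vmnet

If the socket reappears, re-running the start command quoted above should get past the "Connection refused" stage.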

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-422000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-422000 create -f testdata/busybox.yaml: exit status 1 (28.831083ms)

** stderr ** 
	error: context "no-preload-422000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-422000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (28.951125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (28.461583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
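This failure is purely downstream of FirstStart: the cluster was never created, so the kubeconfig has no "no-preload-422000" context for kubectl to select. A quick check, using the kubeconfig path printed in the start logs (the command itself is standard kubectl, not taken from this run):

	# List contexts in the integration kubeconfig; the profile should be absent
	KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig kubectl config get-contexts

The remaining context-dependent tests in this group (EnableAddonWhileActive, UserAppExistsAfterStop, AddonExistsAfterStop) fail the same way for the same reason.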

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-422000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-422000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-422000 describe deploy/metrics-server -n kube-system: exit status 1 (27.165625ms)

** stderr ** 
	error: context "no-preload-422000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-422000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (29.192375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.178576625s)

-- stdout --
	* [no-preload-422000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-422000" primary control-plane node in "no-preload-422000" cluster
	* Restarting existing qemu2 VM for "no-preload-422000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-422000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:28.184637    6246 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:28.184755    6246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:28.184758    6246 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:28.184761    6246 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:28.184907    6246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:28.185891    6246 out.go:298] Setting JSON to false
	I0725 11:21:28.201934    6246 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4852,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:28.202018    6246 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:28.206220    6246 out.go:177] * [no-preload-422000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:28.213213    6246 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:28.213277    6246 notify.go:220] Checking for updates...
	I0725 11:21:28.220135    6246 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:28.223205    6246 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:28.226283    6246 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:28.229180    6246 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:28.232271    6246 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:28.235451    6246 config.go:182] Loaded profile config "no-preload-422000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0725 11:21:28.235708    6246 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:28.239185    6246 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:21:28.246201    6246 start.go:297] selected driver: qemu2
	I0725 11:21:28.246208    6246 start.go:901] validating driver "qemu2" against &{Name:no-preload-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:28.246268    6246 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:28.248482    6246 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:28.248522    6246 cni.go:84] Creating CNI manager for ""
	I0725 11:21:28.248530    6246 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:28.248553    6246 start.go:340] cluster config:
	{Name:no-preload-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:28.251929    6246 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.259172    6246 out.go:177] * Starting "no-preload-422000" primary control-plane node in "no-preload-422000" cluster
	I0725 11:21:28.262157    6246 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:21:28.262239    6246 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/no-preload-422000/config.json ...
	I0725 11:21:28.262256    6246 cache.go:107] acquiring lock: {Name:mk5653692817070271d2551157724158266313f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262263    6246 cache.go:107] acquiring lock: {Name:mkacbf98bc8792972e481cb22a106e61bc17a7a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262283    6246 cache.go:107] acquiring lock: {Name:mk3e5bc99c23fc45496fa1d020364c4e89bf927d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262310    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0725 11:21:28.262322    6246 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 74.917µs
	I0725 11:21:28.262328    6246 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0725 11:21:28.262330    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0725 11:21:28.262336    6246 cache.go:107] acquiring lock: {Name:mkee2386a5f4dfba4c0b9e7a5b52292438adc43f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262337    6246 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 92.584µs
	I0725 11:21:28.262346    6246 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0725 11:21:28.262353    6246 cache.go:107] acquiring lock: {Name:mkbb761be1797895c4e4b2d2d799a5dee28babe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262365    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0725 11:21:28.262371    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0725 11:21:28.262375    6246 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 107.584µs
	I0725 11:21:28.262383    6246 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0725 11:21:28.262378    6246 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 42.416µs
	I0725 11:21:28.262388    6246 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0725 11:21:28.262392    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0725 11:21:28.262399    6246 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 47.25µs
	I0725 11:21:28.262402    6246 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0725 11:21:28.262356    6246 cache.go:107] acquiring lock: {Name:mk6e0edc4c13d62aafa540441df668aa7d73a837 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262415    6246 cache.go:107] acquiring lock: {Name:mk385070746c407194c8c713af5f997fa7e90199 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262428    6246 cache.go:107] acquiring lock: {Name:mk04ecf0f8e1352974e4b7236c0a528c2d68022d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:28.262453    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0725 11:21:28.262457    6246 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 131.875µs
	I0725 11:21:28.262466    6246 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0725 11:21:28.262474    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0725 11:21:28.262486    6246 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 92µs
	I0725 11:21:28.262490    6246 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0725 11:21:28.262478    6246 cache.go:115] /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0725 11:21:28.262498    6246 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 129.416µs
	I0725 11:21:28.262501    6246 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0725 11:21:28.262504    6246 cache.go:87] Successfully saved all images to host disk.
	I0725 11:21:28.262620    6246 start.go:360] acquireMachinesLock for no-preload-422000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:28.262650    6246 start.go:364] duration metric: took 23.666µs to acquireMachinesLock for "no-preload-422000"
	I0725 11:21:28.262660    6246 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:28.262666    6246 fix.go:54] fixHost starting: 
	I0725 11:21:28.262783    6246 fix.go:112] recreateIfNeeded on no-preload-422000: state=Stopped err=<nil>
	W0725 11:21:28.262792    6246 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:28.270195    6246 out.go:177] * Restarting existing qemu2 VM for "no-preload-422000" ...
	I0725 11:21:28.274212    6246 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:28.274257    6246 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:05:5f:dd:b0:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:28.276315    6246 main.go:141] libmachine: STDOUT: 
	I0725 11:21:28.276336    6246 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:28.276365    6246 fix.go:56] duration metric: took 13.699584ms for fixHost
	I0725 11:21:28.276369    6246 start.go:83] releasing machines lock for "no-preload-422000", held for 13.715416ms
	W0725 11:21:28.276374    6246 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:28.276400    6246 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:28.276405    6246 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:33.278575    6246 start.go:360] acquireMachinesLock for no-preload-422000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:33.279031    6246 start.go:364] duration metric: took 349.667µs to acquireMachinesLock for "no-preload-422000"
	I0725 11:21:33.279181    6246 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:33.279205    6246 fix.go:54] fixHost starting: 
	I0725 11:21:33.279927    6246 fix.go:112] recreateIfNeeded on no-preload-422000: state=Stopped err=<nil>
	W0725 11:21:33.279954    6246 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:33.284816    6246 out.go:177] * Restarting existing qemu2 VM for "no-preload-422000" ...
	I0725 11:21:33.291408    6246 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:33.291637    6246 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:05:5f:dd:b0:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/no-preload-422000/disk.qcow2
	I0725 11:21:33.299476    6246 main.go:141] libmachine: STDOUT: 
	I0725 11:21:33.299838    6246 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:33.299909    6246 fix.go:56] duration metric: took 20.710333ms for fixHost
	I0725 11:21:33.299928    6246 start.go:83] releasing machines lock for "no-preload-422000", held for 20.874333ms
	W0725 11:21:33.300069    6246 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-422000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-422000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:33.307445    6246 out.go:177] 
	W0725 11:21:33.311564    6246 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:33.311597    6246 out.go:239] * 
	* 
	W0725 11:21:33.313194    6246 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:33.326314    6246 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-422000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (53.719625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.23s)
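Note the retry pattern in the stderr above: after the first "Connection refused", start.go waits five seconds and replays the identical qemu invocation, which fails identically, so the problem is persistent rather than transient. The connect step can be probed in isolation with the same client binary minikube invokes; wrapping `true` here is an assumption based on the invocation pattern in the log (socket path first, then the command to exec with the connection on fd 3):

	# Probe only the socket connection; `true` exits immediately on success
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If this also prints the "Failed to connect" error, the daemon is down and no number of minikube retries will succeed.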

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-422000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (30.483708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-422000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-422000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-422000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.694167ms)

** stderr ** 
	error: context "no-preload-422000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-422000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (28.543834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-422000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (27.989708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
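The want/got diff above reports every expected v1.31.0-beta.0 image as missing only because "image list" had no running VM to query. The cache.go lines earlier in this group ("Successfully saved all images to host disk.") show the images exist on the host, which can be confirmed directly against the cache directory those lines print:

	# Host-side image cache from the log; entries here should match the images
	# the diff reports as missing from the never-started VM
	ls /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/images/arm64/registry.k8s.io/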

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-422000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-422000 --alsologtostderr -v=1: exit status 83 (38.225167ms)

-- stdout --
	* The control-plane node no-preload-422000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-422000"

-- /stdout --
** stderr ** 
	I0725 11:21:33.574535    6265 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:33.574692    6265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:33.574696    6265 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:33.574698    6265 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:33.574820    6265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:33.575042    6265 out.go:298] Setting JSON to false
	I0725 11:21:33.575049    6265 mustload.go:65] Loading cluster: no-preload-422000
	I0725 11:21:33.575286    6265 config.go:182] Loaded profile config "no-preload-422000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0725 11:21:33.579178    6265 out.go:177] * The control-plane node no-preload-422000 host is not running: state=Stopped
	I0725 11:21:33.582107    6265 out.go:177]   To start a cluster, run: "minikube start -p no-preload-422000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-422000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (28.586875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (29.573ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-422000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.77569675s)

-- stdout --
	* [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-205000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:33.875585    6282 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:33.875721    6282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:33.875724    6282 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:33.875732    6282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:33.875856    6282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:33.876912    6282 out.go:298] Setting JSON to false
	I0725 11:21:33.893304    6282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4857,"bootTime":1721926836,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:33.893371    6282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:33.898178    6282 out.go:177] * [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:33.904212    6282 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:33.904268    6282 notify.go:220] Checking for updates...
	I0725 11:21:33.911181    6282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:33.914165    6282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:33.917168    6282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:33.920044    6282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:33.923147    6282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:33.926563    6282 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:33.926624    6282 config.go:182] Loaded profile config "stopped-upgrade-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0725 11:21:33.926674    6282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:33.930040    6282 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:21:33.937126    6282 start.go:297] selected driver: qemu2
	I0725 11:21:33.937132    6282 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:21:33.937138    6282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:33.939436    6282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:21:33.940602    6282 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:21:33.943199    6282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:33.943216    6282 cni.go:84] Creating CNI manager for ""
	I0725 11:21:33.943222    6282 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:33.943225    6282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:21:33.943251    6282 start.go:340] cluster config:
	{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:33.946836    6282 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:33.955097    6282 out.go:177] * Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	I0725 11:21:33.959137    6282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:21:33.959157    6282 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:21:33.959166    6282 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:33.959224    6282 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:33.959229    6282 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:21:33.959281    6282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/embed-certs-205000/config.json ...
	I0725 11:21:33.959292    6282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/embed-certs-205000/config.json: {Name:mk941ab4ff3764734f93323450a512f244f8f62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:21:33.959638    6282 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:33.959669    6282 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "embed-certs-205000"
	I0725 11:21:33.959684    6282 start.go:93] Provisioning new machine with config: &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:33.959710    6282 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:33.967141    6282 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:33.983891    6282 start.go:159] libmachine.API.Create for "embed-certs-205000" (driver="qemu2")
	I0725 11:21:33.983923    6282 client.go:168] LocalClient.Create starting
	I0725 11:21:33.983987    6282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:33.984018    6282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:33.984032    6282 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:33.984069    6282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:33.984093    6282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:33.984099    6282 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:33.984533    6282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:34.145842    6282 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:34.212389    6282 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:34.212395    6282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:34.212561    6282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:34.221659    6282 main.go:141] libmachine: STDOUT: 
	I0725 11:21:34.221674    6282 main.go:141] libmachine: STDERR: 
	I0725 11:21:34.221752    6282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2 +20000M
	I0725 11:21:34.229613    6282 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:34.229625    6282 main.go:141] libmachine: STDERR: 
	I0725 11:21:34.229642    6282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:34.229645    6282 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:34.229661    6282 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:34.229685    6282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:af:cf:12:13:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:34.231270    6282 main.go:141] libmachine: STDOUT: 
	I0725 11:21:34.231285    6282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:34.231303    6282 client.go:171] duration metric: took 247.382834ms to LocalClient.Create
	I0725 11:21:36.233436    6282 start.go:128] duration metric: took 2.273777542s to createHost
	I0725 11:21:36.233507    6282 start.go:83] releasing machines lock for "embed-certs-205000", held for 2.27390175s
	W0725 11:21:36.233583    6282 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:36.249647    6282 out.go:177] * Deleting "embed-certs-205000" in qemu2 ...
	W0725 11:21:36.272985    6282 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:36.273007    6282 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:41.275131    6282 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:41.275595    6282 start.go:364] duration metric: took 357.583µs to acquireMachinesLock for "embed-certs-205000"
	I0725 11:21:41.275793    6282 start.go:93] Provisioning new machine with config: &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:41.276015    6282 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:41.287498    6282 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:41.337738    6282 start.go:159] libmachine.API.Create for "embed-certs-205000" (driver="qemu2")
	I0725 11:21:41.337792    6282 client.go:168] LocalClient.Create starting
	I0725 11:21:41.337916    6282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:41.337984    6282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:41.338003    6282 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:41.338068    6282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:41.338113    6282 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:41.338128    6282 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:41.338651    6282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:41.505627    6282 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:41.562654    6282 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:41.562660    6282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:41.562825    6282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:41.572193    6282 main.go:141] libmachine: STDOUT: 
	I0725 11:21:41.572213    6282 main.go:141] libmachine: STDERR: 
	I0725 11:21:41.572252    6282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2 +20000M
	I0725 11:21:41.580030    6282 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:41.580062    6282 main.go:141] libmachine: STDERR: 
	I0725 11:21:41.580076    6282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:41.580080    6282 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:41.580088    6282 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:41.580128    6282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:08:81:3f:73:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:41.581761    6282 main.go:141] libmachine: STDOUT: 
	I0725 11:21:41.581776    6282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:41.581787    6282 client.go:171] duration metric: took 243.99875ms to LocalClient.Create
	I0725 11:21:43.583895    6282 start.go:128] duration metric: took 2.307919291s to createHost
	I0725 11:21:43.583944    6282 start.go:83] releasing machines lock for "embed-certs-205000", held for 2.30840025s
	W0725 11:21:43.584328    6282 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:43.597896    6282 out.go:177] 
	W0725 11:21:43.600868    6282 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:43.600892    6282 out.go:239] * 
	* 
	W0725 11:21:43.603611    6282 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:43.610804    6282 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (63.066458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.84s)
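Note: this failure, and every qemu2 start failure below, reduces to the same root cause: nothing is serving /var/run/socket_vmnet on the CI host, so socket_vmnet_client fails to connect and qemu-system-aarch64 never launches. A minimal diagnostic sketch, assuming socket_vmnet was installed via Homebrew (the service name below is an assumption, not taken from this report):

    # Does the daemon socket exist, and is a socket_vmnet process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Exercise the client the same way minikube does; 'true' is a placeholder payload command
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # If the Homebrew service exists on this host, restarting it as root may recover it
    sudo brew services restart socket_vmnet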

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (11.071673083s)

-- stdout --
	* [default-k8s-diff-port-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-986000" primary control-plane node in "default-k8s-diff-port-986000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:35.011331    6302 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:35.011537    6302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:35.011540    6302 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:35.011542    6302 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:35.011669    6302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:35.012734    6302 out.go:298] Setting JSON to false
	I0725 11:21:35.028874    6302 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4859,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:35.028935    6302 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:35.033106    6302 out.go:177] * [default-k8s-diff-port-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:35.040149    6302 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:35.040209    6302 notify.go:220] Checking for updates...
	I0725 11:21:35.047119    6302 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:35.050111    6302 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:35.053044    6302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:35.056097    6302 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:35.059109    6302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:35.062303    6302 config.go:182] Loaded profile config "embed-certs-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:35.062374    6302 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:35.062423    6302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:35.067021    6302 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:21:35.074058    6302 start.go:297] selected driver: qemu2
	I0725 11:21:35.074066    6302 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:21:35.074073    6302 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:35.076327    6302 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 11:21:35.079063    6302 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:21:35.082148    6302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:35.082177    6302 cni.go:84] Creating CNI manager for ""
	I0725 11:21:35.082186    6302 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:35.082190    6302 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:21:35.082220    6302 start.go:340] cluster config:
	{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:35.085978    6302 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:35.093084    6302 out.go:177] * Starting "default-k8s-diff-port-986000" primary control-plane node in "default-k8s-diff-port-986000" cluster
	I0725 11:21:35.097039    6302 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:21:35.097059    6302 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:21:35.097069    6302 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:35.097143    6302 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:35.097149    6302 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:21:35.097206    6302 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/default-k8s-diff-port-986000/config.json ...
	I0725 11:21:35.097217    6302 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/default-k8s-diff-port-986000/config.json: {Name:mk4a031bfd87d9be6b99c2aa1a010eb1bfa77bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:21:35.097540    6302 start.go:360] acquireMachinesLock for default-k8s-diff-port-986000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:36.233634    6302 start.go:364] duration metric: took 1.136108958s to acquireMachinesLock for "default-k8s-diff-port-986000"
	I0725 11:21:36.233841    6302 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:36.234065    6302 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:36.242681    6302 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:36.292249    6302 start.go:159] libmachine.API.Create for "default-k8s-diff-port-986000" (driver="qemu2")
	I0725 11:21:36.292308    6302 client.go:168] LocalClient.Create starting
	I0725 11:21:36.292431    6302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:36.292487    6302 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:36.292502    6302 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:36.292566    6302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:36.292604    6302 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:36.292614    6302 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:36.293226    6302 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:36.457155    6302 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:36.580509    6302 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:36.580515    6302 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:36.580696    6302 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:36.589916    6302 main.go:141] libmachine: STDOUT: 
	I0725 11:21:36.589930    6302 main.go:141] libmachine: STDERR: 
	I0725 11:21:36.589978    6302 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2 +20000M
	I0725 11:21:36.597691    6302 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:36.597702    6302 main.go:141] libmachine: STDERR: 
	I0725 11:21:36.597721    6302 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:36.597726    6302 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:36.597740    6302 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:36.597766    6302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:2a:f0:e9:48:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:36.599315    6302 main.go:141] libmachine: STDOUT: 
	I0725 11:21:36.599327    6302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:36.599346    6302 client.go:171] duration metric: took 307.043334ms to LocalClient.Create
	I0725 11:21:38.601420    6302 start.go:128] duration metric: took 2.367383125s to createHost
	I0725 11:21:38.601471    6302 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 2.367876542s
	W0725 11:21:38.601549    6302 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:38.618111    6302 out.go:177] * Deleting "default-k8s-diff-port-986000" in qemu2 ...
	W0725 11:21:38.646238    6302 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:38.646261    6302 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:43.648237    6302 start.go:360] acquireMachinesLock for default-k8s-diff-port-986000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:43.648411    6302 start.go:364] duration metric: took 129.333µs to acquireMachinesLock for "default-k8s-diff-port-986000"
	I0725 11:21:43.648479    6302 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:43.648597    6302 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:43.656808    6302 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:43.685800    6302 start.go:159] libmachine.API.Create for "default-k8s-diff-port-986000" (driver="qemu2")
	I0725 11:21:43.685834    6302 client.go:168] LocalClient.Create starting
	I0725 11:21:43.685916    6302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:43.685957    6302 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:43.685972    6302 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:43.686017    6302 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:43.686039    6302 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:43.686047    6302 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:43.686431    6302 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:43.877719    6302 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:43.989610    6302 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:43.989617    6302 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:43.989779    6302 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:43.998631    6302 main.go:141] libmachine: STDOUT: 
	I0725 11:21:43.998647    6302 main.go:141] libmachine: STDERR: 
	I0725 11:21:43.998690    6302 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2 +20000M
	I0725 11:21:44.006817    6302 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:44.006832    6302 main.go:141] libmachine: STDERR: 
	I0725 11:21:44.006843    6302 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:44.006847    6302 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:44.006861    6302 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:44.006888    6302 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a0:16:e1:49:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:44.008655    6302 main.go:141] libmachine: STDOUT: 
	I0725 11:21:44.008670    6302 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:44.008682    6302 client.go:171] duration metric: took 322.854709ms to LocalClient.Create
	I0725 11:21:46.010126    6302 start.go:128] duration metric: took 2.361590459s to createHost
	I0725 11:21:46.010164    6302 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 2.361817666s
	W0725 11:21:46.010270    6302 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:46.025593    6302 out.go:177] 
	W0725 11:21:46.033540    6302 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:46.033559    6302 out.go:239] * 
	* 
	W0725 11:21:46.034444    6302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:46.045532    6302 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (44.685125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.12s)
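Note: the 11.12s duration here is two identical createHost attempts bridged by the 5-second retry logged at start.go:729; neither attempt gets past the socket connect. If socket_vmnet cannot be repaired on the host, the qemu2 driver can fall back to QEMU's builtin user networking, which sidesteps the socket entirely at the cost of the guest not being directly reachable from the host. A hedged sketch (the --network value follows minikube's qemu2 driver documentation; verify it against the installed version):

    out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 \
      --apiserver-port=8444 --driver=qemu2 --network=user --kubernetes-version=v1.30.3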

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-205000 create -f testdata/busybox.yaml: exit status 1 (32.312958ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-205000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (34.128875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (33.582292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
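Note: this is a cascade failure rather than an independent one: because FirstStart never created the VM, minikube never wrote an "embed-certs-205000" context into the kubeconfig, so every kubectl --context invocation can only fail with "context does not exist". A quick sketch to confirm the cascade (both commands are standard and need no running cluster):

    kubectl config get-contexts
    out/minikube-darwin-arm64 profile list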

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-205000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system: exit status 1 (27.931208ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-205000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.0795ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)
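Note: "addons enable" itself shows no non-zero exit above, which suggests it only records the addon plus its image and registry overrides in the stopped profile's config; the assertion at start_stop_delete_test.go:221 then fails for the same missing-context reason as DeployApp. On a healthy run, the check amounts to something like this sketch (the jsonpath field path is an assumption about the metrics-server deployment layout):

    kubectl --context embed-certs-205000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to print: fake.domain/registry.k8s.io/echoserver:1.4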

TestStartStop/group/embed-certs/serial/SecondStart (5.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.263669167s)

-- stdout --
	* [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	* Restarting existing qemu2 VM for "embed-certs-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-205000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:45.858019    6349 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:45.858147    6349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:45.858151    6349 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:45.858153    6349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:45.858298    6349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:45.859358    6349 out.go:298] Setting JSON to false
	I0725 11:21:45.875362    6349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4869,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:45.875480    6349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:45.880593    6349 out.go:177] * [embed-certs-205000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:45.890592    6349 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:45.890626    6349 notify.go:220] Checking for updates...
	I0725 11:21:45.897547    6349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:45.900486    6349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:45.903521    6349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:45.906565    6349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:45.909421    6349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:45.912870    6349 config.go:182] Loaded profile config "embed-certs-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:45.913122    6349 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:45.917499    6349 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:21:45.924553    6349 start.go:297] selected driver: qemu2
	I0725 11:21:45.924560    6349 start.go:901] validating driver "qemu2" against &{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:45.924622    6349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:45.927123    6349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:45.927167    6349 cni.go:84] Creating CNI manager for ""
	I0725 11:21:45.927174    6349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:45.927202    6349 start.go:340] cluster config:
	{Name:embed-certs-205000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-205000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:45.930857    6349 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:45.938537    6349 out.go:177] * Starting "embed-certs-205000" primary control-plane node in "embed-certs-205000" cluster
	I0725 11:21:45.942509    6349 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:21:45.942523    6349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:21:45.942532    6349 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:45.942590    6349 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:45.942595    6349 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:21:45.942650    6349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/embed-certs-205000/config.json ...
	I0725 11:21:45.943070    6349 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:46.010198    6349 start.go:364] duration metric: took 67.122167ms to acquireMachinesLock for "embed-certs-205000"
	I0725 11:21:46.010226    6349 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:46.010234    6349 fix.go:54] fixHost starting: 
	I0725 11:21:46.010431    6349 fix.go:112] recreateIfNeeded on embed-certs-205000: state=Stopped err=<nil>
	W0725 11:21:46.010444    6349 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:46.021517    6349 out.go:177] * Restarting existing qemu2 VM for "embed-certs-205000" ...
	I0725 11:21:46.029457    6349 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:46.029523    6349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:08:81:3f:73:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:46.032742    6349 main.go:141] libmachine: STDOUT: 
	I0725 11:21:46.032772    6349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:46.032810    6349 fix.go:56] duration metric: took 22.576666ms for fixHost
	I0725 11:21:46.032816    6349 start.go:83] releasing machines lock for "embed-certs-205000", held for 22.611834ms
	W0725 11:21:46.032827    6349 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:46.032881    6349 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:46.032889    6349 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:51.034979    6349 start.go:360] acquireMachinesLock for embed-certs-205000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:51.035367    6349 start.go:364] duration metric: took 298.667µs to acquireMachinesLock for "embed-certs-205000"
	I0725 11:21:51.035515    6349 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:51.035535    6349 fix.go:54] fixHost starting: 
	I0725 11:21:51.036387    6349 fix.go:112] recreateIfNeeded on embed-certs-205000: state=Stopped err=<nil>
	W0725 11:21:51.036411    6349 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:51.044900    6349 out.go:177] * Restarting existing qemu2 VM for "embed-certs-205000" ...
	I0725 11:21:51.048980    6349 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:51.049172    6349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:08:81:3f:73:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/embed-certs-205000/disk.qcow2
	I0725 11:21:51.058101    6349 main.go:141] libmachine: STDOUT: 
	I0725 11:21:51.058172    6349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:51.058236    6349 fix.go:56] duration metric: took 22.700791ms for fixHost
	I0725 11:21:51.058251    6349 start.go:83] releasing machines lock for "embed-certs-205000", held for 22.8635ms
	W0725 11:21:51.058408    6349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-205000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:51.065961    6349 out.go:177] 
	W0725 11:21:51.070081    6349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:51.070104    6349 out.go:239] * 
	* 
	W0725 11:21:51.072525    6349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:51.081019    6349 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-205000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (65.422292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.33s)
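
All of the start failures above reduce to a single precondition: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the socket_vmnet daemon on the Unix socket /var/run/socket_vmnet, and that connection is refused. The following standalone Go sketch (hypothetical, not part of the test suite) probes the same socket the same way and fails identically while the daemon is down:

	// Minimal sketch, assuming only that socket_vmnet listens on the Unix
	// socket passed to socket_vmnet_client above. With the daemon down,
	// Dial fails with "connect: connection refused", matching the STDERR
	// captured in the logs.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}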

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml: exit status 1 (28.289542ms)

** stderr ** 
	error: context "default-k8s-diff-port-986000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.440792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.064833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
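
The create call never reaches an API server: the kubeconfig context was never written because the cluster start above exited with status 80. Below is a simplified sketch of the "(dbg) Run" pattern in these lines, under the assumption that it wraps os/exec and reports a non-zero exit rather than aborting, so the post-mortem can still run:

	// Simplified, hypothetical version of the "(dbg) Run" helper: run the
	// command, capture combined output, and log a non-zero exit.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-986000",
			"create", "-f", "testdata/busybox.yaml")
		out, err := cmd.CombinedOutput()
		if ee, ok := err.(*exec.ExitError); ok {
			// Exits 1 here: the context does not exist because the
			// cluster start failed.
			fmt.Printf("Non-zero exit: %d\n%s", ee.ExitCode(), out)
		} else if err != nil {
			fmt.Println(err) // e.g. kubectl not on PATH
		} else {
			fmt.Printf("%s", out)
		}
	}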

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-986000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system: exit status 1 (26.754ms)

** stderr ** 
	error: context "default-k8s-diff-port-986000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.780041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
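
The expected string shows how the two flags compose: the --registries value for MetricsServer (fake.domain) is prefixed onto the --images value (registry.k8s.io/echoserver:1.4), giving fake.domain/registry.k8s.io/echoserver:1.4. A sketch of that composition, inferred only from the flags and the expected value (addonImage is a hypothetical helper, not minikube's own):

	// Hypothetical helper illustrating the registry/image join implied by
	// the expectation above.
	package main

	import "fmt"

	func addonImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(addonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// prints: fake.domain/registry.k8s.io/echoserver:1.4
	}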

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.520454542s)

-- stdout --
	* [default-k8s-diff-port-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-986000" primary control-plane node in "default-k8s-diff-port-986000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:48.557105    6384 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:48.557246    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:48.557250    6384 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:48.557257    6384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:48.557398    6384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:48.558358    6384 out.go:298] Setting JSON to false
	I0725 11:21:48.574228    6384 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4872,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:48.574308    6384 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:48.579007    6384 out.go:177] * [default-k8s-diff-port-986000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:48.585961    6384 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:48.586019    6384 notify.go:220] Checking for updates...
	I0725 11:21:48.592938    6384 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:48.596023    6384 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:48.599000    6384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:48.601993    6384 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:48.604944    6384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:48.608216    6384 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:48.608477    6384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:48.611869    6384 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:21:48.618932    6384 start.go:297] selected driver: qemu2
	I0725 11:21:48.618938    6384 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:48.618991    6384 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:48.621406    6384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 11:21:48.621461    6384 cni.go:84] Creating CNI manager for ""
	I0725 11:21:48.621468    6384 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:48.621497    6384 start.go:340] cluster config:
	{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:48.624994    6384 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:48.632981    6384 out.go:177] * Starting "default-k8s-diff-port-986000" primary control-plane node in "default-k8s-diff-port-986000" cluster
	I0725 11:21:48.636969    6384 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 11:21:48.636986    6384 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 11:21:48.636996    6384 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:48.637056    6384 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:48.637065    6384 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 11:21:48.637131    6384 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/default-k8s-diff-port-986000/config.json ...
	I0725 11:21:48.637553    6384 start.go:360] acquireMachinesLock for default-k8s-diff-port-986000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:48.637580    6384 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "default-k8s-diff-port-986000"
	I0725 11:21:48.637589    6384 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:48.637595    6384 fix.go:54] fixHost starting: 
	I0725 11:21:48.637712    6384 fix.go:112] recreateIfNeeded on default-k8s-diff-port-986000: state=Stopped err=<nil>
	W0725 11:21:48.637732    6384 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:48.640988    6384 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	I0725 11:21:48.648976    6384 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:48.649035    6384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a0:16:e1:49:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:48.651143    6384 main.go:141] libmachine: STDOUT: 
	I0725 11:21:48.651165    6384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:48.651194    6384 fix.go:56] duration metric: took 13.599042ms for fixHost
	I0725 11:21:48.651201    6384 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 13.617166ms
	W0725 11:21:48.651207    6384 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:48.651236    6384 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:48.651240    6384 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:53.653280    6384 start.go:360] acquireMachinesLock for default-k8s-diff-port-986000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:53.975163    6384 start.go:364] duration metric: took 321.707791ms to acquireMachinesLock for "default-k8s-diff-port-986000"
	I0725 11:21:53.975238    6384 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:21:53.975258    6384 fix.go:54] fixHost starting: 
	I0725 11:21:53.975954    6384 fix.go:112] recreateIfNeeded on default-k8s-diff-port-986000: state=Stopped err=<nil>
	W0725 11:21:53.975979    6384 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:21:53.985285    6384 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	I0725 11:21:53.998333    6384 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:53.998567    6384 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a0:16:e1:49:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I0725 11:21:54.008251    6384 main.go:141] libmachine: STDOUT: 
	I0725 11:21:54.008314    6384 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:54.008396    6384 fix.go:56] duration metric: took 33.138583ms for fixHost
	I0725 11:21:54.008412    6384 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 33.22175ms
	W0725 11:21:54.008617    6384 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:54.017368    6384 out.go:177] 
	W0725 11:21:54.021282    6384 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:21:54.021306    6384 out.go:239] * 
	* 
	W0725 11:21:54.023417    6384 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:21:54.033300    6384 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (64.476083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.59s)
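
The stderr above shows the restart path retrying exactly once: fixHost fails, "Will try again in 5 seconds ..." is logged, the second attempt fails the same way, and the run exits with the GUEST_PROVISION error (exit status 80). A hypothetical simplification of that control flow:

	// Hypothetical simplification of the single retry seen in the logs;
	// fixHost stands in for the real driver start and always fails the
	// way the captured STDERR does.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func fixHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := fixHost()
		if err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			err = fixHost()
		}
		if err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}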

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-205000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (31.140917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-205000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.378334ms)

** stderr ** 
	error: context "embed-certs-205000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-205000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (27.896167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-205000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (29.109917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
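
The "(-want +got)" report above follows the go-cmp diff convention: with the host stopped, "image list" returns nothing, so every expected image carries a leading "-". A sketch of the comparison (the exact rendering is go-cmp's and may differ slightly from the log):

	// Sketch of the want/got image comparison; an empty "got" list marks
	// every expected entry as missing.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/pause:3.9",
		}
		var got []string // the stopped host reported no images
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
		}
	}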

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1: exit status 83 (39.375834ms)

-- stdout --
	* The control-plane node embed-certs-205000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-205000"

-- /stdout --
** stderr ** 
	I0725 11:21:51.343444    6403 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:51.343605    6403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:51.343609    6403 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:51.343615    6403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:51.343745    6403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:51.343949    6403 out.go:298] Setting JSON to false
	I0725 11:21:51.343955    6403 mustload.go:65] Loading cluster: embed-certs-205000
	I0725 11:21:51.344144    6403 config.go:182] Loaded profile config "embed-certs-205000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:51.347231    6403 out.go:177] * The control-plane node embed-certs-205000 host is not running: state=Stopped
	I0725 11:21:51.351386    6403 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-205000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-205000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (27.876583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (28.954542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-205000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
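
pause exits with status 83 rather than 80 because mustload inspects the saved profile first, sees the host Stopped, prints the start advice, and never reaches the provisioning path. A hypothetical sketch of that guard (the exit code is taken from the run captured above, not from minikube's source):

	// Hypothetical guard matching the Pause logs: when the control-plane
	// host is not running, print advice and exit with the captured code.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		state := "Stopped" // what `status --format={{.Host}}` reported
		if state != "Running" {
			fmt.Printf("* The control-plane node embed-certs-205000 host is not running: state=%s\n", state)
			fmt.Println(`  To start a cluster, run: "minikube start -p embed-certs-205000"`)
			os.Exit(83)
		}
	}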

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.848158708s)

-- stdout --
	* [newest-cni-471000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-471000" primary control-plane node in "newest-cni-471000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-471000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:21:51.645816    6420 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:51.645931    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:51.645934    6420 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:51.645937    6420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:51.646063    6420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:51.647292    6420 out.go:298] Setting JSON to false
	I0725 11:21:51.663299    6420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4875,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:21:51.663366    6420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:21:51.667454    6420 out.go:177] * [newest-cni-471000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:21:51.673296    6420 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:21:51.673322    6420 notify.go:220] Checking for updates...
	I0725 11:21:51.680231    6420 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:21:51.683364    6420 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:21:51.686365    6420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:21:51.687629    6420 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:21:51.690333    6420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:21:51.693649    6420 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:51.693707    6420 config.go:182] Loaded profile config "multinode-638000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:51.693753    6420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:21:51.698187    6420 out.go:177] * Using the qemu2 driver based on user configuration
	I0725 11:21:51.705301    6420 start.go:297] selected driver: qemu2
	I0725 11:21:51.705308    6420 start.go:901] validating driver "qemu2" against <nil>
	I0725 11:21:51.705314    6420 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:21:51.707654    6420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0725 11:21:51.707675    6420 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0725 11:21:51.711097    6420 out.go:177] * Automatically selected the socket_vmnet network
	I0725 11:21:51.714422    6420 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 11:21:51.714435    6420 cni.go:84] Creating CNI manager for ""
	I0725 11:21:51.714445    6420 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:21:51.714449    6420 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 11:21:51.714475    6420 start.go:340] cluster config:
	{Name:newest-cni-471000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:21:51.717976    6420 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:21:51.725237    6420 out.go:177] * Starting "newest-cni-471000" primary control-plane node in "newest-cni-471000" cluster
	I0725 11:21:51.729371    6420 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:21:51.729388    6420 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0725 11:21:51.729394    6420 cache.go:56] Caching tarball of preloaded images
	I0725 11:21:51.729457    6420 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:21:51.729463    6420 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 11:21:51.729516    6420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/newest-cni-471000/config.json ...
	I0725 11:21:51.729527    6420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/newest-cni-471000/config.json: {Name:mk0892befdf73268377d430669589f190c8b8db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 11:21:51.729886    6420 start.go:360] acquireMachinesLock for newest-cni-471000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:51.729919    6420 start.go:364] duration metric: took 27.292µs to acquireMachinesLock for "newest-cni-471000"
	I0725 11:21:51.729931    6420 start.go:93] Provisioning new machine with config: &{Name:newest-cni-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:51.729959    6420 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:51.738321    6420 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:51.755674    6420 start.go:159] libmachine.API.Create for "newest-cni-471000" (driver="qemu2")
	I0725 11:21:51.755701    6420 client.go:168] LocalClient.Create starting
	I0725 11:21:51.755762    6420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:51.755790    6420 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:51.755799    6420 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:51.755839    6420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:51.755862    6420 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:51.755874    6420 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:51.756280    6420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:51.910600    6420 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:51.953874    6420 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:51.953880    6420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:51.954046    6420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:51.963206    6420 main.go:141] libmachine: STDOUT: 
	I0725 11:21:51.963226    6420 main.go:141] libmachine: STDERR: 
	I0725 11:21:51.963281    6420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2 +20000M
	I0725 11:21:51.971081    6420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:51.971094    6420 main.go:141] libmachine: STDERR: 
	I0725 11:21:51.971106    6420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:51.971111    6420 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:51.971123    6420 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:51.971152    6420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:4c:8f:ca:09:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:51.972751    6420 main.go:141] libmachine: STDOUT: 
	I0725 11:21:51.972776    6420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:51.972794    6420 client.go:171] duration metric: took 217.091292ms to LocalClient.Create
	I0725 11:21:53.974920    6420 start.go:128] duration metric: took 2.245018708s to createHost
	I0725 11:21:53.974982    6420 start.go:83] releasing machines lock for "newest-cni-471000", held for 2.245126541s
	W0725 11:21:53.975054    6420 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:53.995328    6420 out.go:177] * Deleting "newest-cni-471000" in qemu2 ...
	W0725 11:21:54.050320    6420 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:21:54.050355    6420 start.go:729] Will try again in 5 seconds ...
	I0725 11:21:59.052392    6420 start.go:360] acquireMachinesLock for newest-cni-471000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:21:59.052932    6420 start.go:364] duration metric: took 457.041µs to acquireMachinesLock for "newest-cni-471000"
	I0725 11:21:59.053085    6420 start.go:93] Provisioning new machine with config: &{Name:newest-cni-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0725 11:21:59.053371    6420 start.go:125] createHost starting for "" (driver="qemu2")
	I0725 11:21:59.058958    6420 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0725 11:21:59.108688    6420 start.go:159] libmachine.API.Create for "newest-cni-471000" (driver="qemu2")
	I0725 11:21:59.108743    6420 client.go:168] LocalClient.Create starting
	I0725 11:21:59.108883    6420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/ca.pem
	I0725 11:21:59.108950    6420 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:59.108965    6420 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:59.109025    6420 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19326-1196/.minikube/certs/cert.pem
	I0725 11:21:59.109069    6420 main.go:141] libmachine: Decoding PEM data...
	I0725 11:21:59.109109    6420 main.go:141] libmachine: Parsing certificate...
	I0725 11:21:59.109683    6420 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso...
	I0725 11:21:59.273983    6420 main.go:141] libmachine: Creating SSH key...
	I0725 11:21:59.405692    6420 main.go:141] libmachine: Creating Disk image...
	I0725 11:21:59.405698    6420 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0725 11:21:59.405863    6420 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2.raw /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:59.415349    6420 main.go:141] libmachine: STDOUT: 
	I0725 11:21:59.415365    6420 main.go:141] libmachine: STDERR: 
	I0725 11:21:59.415412    6420 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2 +20000M
	I0725 11:21:59.423236    6420 main.go:141] libmachine: STDOUT: Image resized.
	
	I0725 11:21:59.423251    6420 main.go:141] libmachine: STDERR: 
	I0725 11:21:59.423261    6420 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:59.423265    6420 main.go:141] libmachine: Starting QEMU VM...
	I0725 11:21:59.423281    6420 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:21:59.423312    6420 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ee:fd:12:61:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:21:59.424900    6420 main.go:141] libmachine: STDOUT: 
	I0725 11:21:59.424917    6420 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:21:59.424928    6420 client.go:171] duration metric: took 316.1905ms to LocalClient.Create
	I0725 11:22:01.427084    6420 start.go:128] duration metric: took 2.373753542s to createHost
	I0725 11:22:01.427182    6420 start.go:83] releasing machines lock for "newest-cni-471000", held for 2.374303625s
	W0725 11:22:01.427571    6420 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-471000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-471000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:22:01.437117    6420 out.go:177] 
	W0725 11:22:01.443159    6420 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:22:01.443262    6420 out.go:239] * 
	* 
	W0725 11:22:01.446055    6420 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:22:01.457142    6420 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (67.72225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-471000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
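Note: every qemu2 start failure in this group reduces to the same line in the log above: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so no VM ever boots and every later step sees a stopped host. A minimal host-side triage sketch, assuming a Homebrew-managed socket_vmnet install on the runner (the service command below is an assumption about the setup, not taken from this report; the paths are the ones the driver logs):

	# Does the socket the driver dials exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet was installed via Homebrew and runs as a root service
	sudo brew services restart socket_vmnet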

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-986000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (30.536125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-986000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.144625ms)

** stderr ** 
	error: context "default-k8s-diff-port-986000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (29.154333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-986000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.939375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1: exit status 83 (41.768ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-986000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-986000"

-- /stdout --
** stderr ** 
	I0725 11:21:54.299712    6442 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:21:54.299864    6442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:54.299874    6442 out.go:304] Setting ErrFile to fd 2...
	I0725 11:21:54.299876    6442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:21:54.300005    6442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:21:54.300216    6442 out.go:298] Setting JSON to false
	I0725 11:21:54.300223    6442 mustload.go:65] Loading cluster: default-k8s-diff-port-986000
	I0725 11:21:54.300404    6442 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 11:21:54.304694    6442 out.go:177] * The control-plane node default-k8s-diff-port-986000 host is not running: state=Stopped
	I0725 11:21:54.310729    6442 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-986000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.230666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (28.109542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
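The pause failure itself is secondary: as the stdout above shows, the control-plane host never ran, so minikube declines to pause it and exits with status 83 instead. When scripting against this binary, one might gate pause on host state first; a sketch built only from commands already shown in this report (the "Running" comparison assumes minikube's usual status wording, which does not appear here because the host stayed "Stopped"):

	state="$(out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000)"
	# Only pause a host that is actually running; this report only ever saw "Stopped"
	if [ "$state" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1
	fi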

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
E0725 11:22:10.175313    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.191504917s)

-- stdout --
	* [newest-cni-471000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-471000" primary control-plane node in "newest-cni-471000" cluster
	* Restarting existing qemu2 VM for "newest-cni-471000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-471000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0725 11:22:05.025279    6490 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:22:05.025399    6490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:22:05.025402    6490 out.go:304] Setting ErrFile to fd 2...
	I0725 11:22:05.025405    6490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:22:05.025528    6490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:22:05.026507    6490 out.go:298] Setting JSON to false
	I0725 11:22:05.042680    6490 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4889,"bootTime":1721926836,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 11:22:05.042746    6490 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 11:22:05.046842    6490 out.go:177] * [newest-cni-471000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 11:22:05.053840    6490 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 11:22:05.053907    6490 notify.go:220] Checking for updates...
	I0725 11:22:05.064837    6490 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 11:22:05.067753    6490 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 11:22:05.070785    6490 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 11:22:05.073820    6490 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 11:22:05.076675    6490 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 11:22:05.080086    6490 config.go:182] Loaded profile config "newest-cni-471000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0725 11:22:05.080379    6490 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 11:22:05.084775    6490 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 11:22:05.091795    6490 start.go:297] selected driver: qemu2
	I0725 11:22:05.091803    6490 start.go:901] validating driver "qemu2" against &{Name:newest-cni-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:22:05.091847    6490 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 11:22:05.094238    6490 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0725 11:22:05.094273    6490 cni.go:84] Creating CNI manager for ""
	I0725 11:22:05.094280    6490 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 11:22:05.094313    6490 start.go:340] cluster config:
	{Name:newest-cni-471000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-471000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 11:22:05.097952    6490 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 11:22:05.105711    6490 out.go:177] * Starting "newest-cni-471000" primary control-plane node in "newest-cni-471000" cluster
	I0725 11:22:05.109796    6490 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 11:22:05.109813    6490 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0725 11:22:05.109820    6490 cache.go:56] Caching tarball of preloaded images
	I0725 11:22:05.109881    6490 preload.go:172] Found /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 11:22:05.109886    6490 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 11:22:05.109954    6490 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/newest-cni-471000/config.json ...
	I0725 11:22:05.110387    6490 start.go:360] acquireMachinesLock for newest-cni-471000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:22:05.110415    6490 start.go:364] duration metric: took 22.042µs to acquireMachinesLock for "newest-cni-471000"
	I0725 11:22:05.110425    6490 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:22:05.110430    6490 fix.go:54] fixHost starting: 
	I0725 11:22:05.110540    6490 fix.go:112] recreateIfNeeded on newest-cni-471000: state=Stopped err=<nil>
	W0725 11:22:05.110548    6490 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:22:05.114776    6490 out.go:177] * Restarting existing qemu2 VM for "newest-cni-471000" ...
	I0725 11:22:05.122680    6490 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:22:05.122727    6490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ee:fd:12:61:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:22:05.124802    6490 main.go:141] libmachine: STDOUT: 
	I0725 11:22:05.124821    6490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:22:05.124847    6490 fix.go:56] duration metric: took 14.416959ms for fixHost
	I0725 11:22:05.124851    6490 start.go:83] releasing machines lock for "newest-cni-471000", held for 14.432208ms
	W0725 11:22:05.124858    6490 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:22:05.124895    6490 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:22:05.124900    6490 start.go:729] Will try again in 5 seconds ...
	I0725 11:22:10.127006    6490 start.go:360] acquireMachinesLock for newest-cni-471000: {Name:mk8a57bc247342caa80f52af4d0c8610a84c5028 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0725 11:22:10.127441    6490 start.go:364] duration metric: took 324.834µs to acquireMachinesLock for "newest-cni-471000"
	I0725 11:22:10.127595    6490 start.go:96] Skipping create...Using existing machine configuration
	I0725 11:22:10.127617    6490 fix.go:54] fixHost starting: 
	I0725 11:22:10.128367    6490 fix.go:112] recreateIfNeeded on newest-cni-471000: state=Stopped err=<nil>
	W0725 11:22:10.128392    6490 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 11:22:10.137996    6490 out.go:177] * Restarting existing qemu2 VM for "newest-cni-471000" ...
	I0725 11:22:10.142070    6490 qemu.go:418] Using hvf for hardware acceleration
	I0725 11:22:10.142299    6490 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ee:fd:12:61:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19326-1196/.minikube/machines/newest-cni-471000/disk.qcow2
	I0725 11:22:10.152160    6490 main.go:141] libmachine: STDOUT: 
	I0725 11:22:10.152231    6490 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0725 11:22:10.152338    6490 fix.go:56] duration metric: took 24.724083ms for fixHost
	I0725 11:22:10.152356    6490 start.go:83] releasing machines lock for "newest-cni-471000", held for 24.891417ms
	W0725 11:22:10.152542    6490 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-471000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-471000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0725 11:22:10.160963    6490 out.go:177] 
	W0725 11:22:10.165006    6490 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0725 11:22:10.165060    6490 out.go:239] * 
	* 
	W0725 11:22:10.167667    6490 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 11:22:10.175974    6490 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (67.120458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-471000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-471000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (29.917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-471000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-471000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-471000 --alsologtostderr -v=1: exit status 83 (39.637167ms)

-- stdout --
	* The control-plane node newest-cni-471000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-471000"

-- /stdout --
** stderr ** 
	I0725 11:22:10.357884    6506 out.go:291] Setting OutFile to fd 1 ...
	I0725 11:22:10.358037    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:22:10.358040    6506 out.go:304] Setting ErrFile to fd 2...
	I0725 11:22:10.358042    6506 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 11:22:10.358182    6506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 11:22:10.358410    6506 out.go:298] Setting JSON to false
	I0725 11:22:10.358417    6506 mustload.go:65] Loading cluster: newest-cni-471000
	I0725 11:22:10.358615    6506 config.go:182] Loaded profile config "newest-cni-471000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0725 11:22:10.362589    6506 out.go:177] * The control-plane node newest-cni-471000 host is not running: state=Stopped
	I0725 11:22:10.366573    6506 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-471000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-471000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (30.035166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-471000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (30.21475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-471000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
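Once the socket_vmnet daemon is reachable again, the quickest confirmation is to replay the exact start command the failing test used (copied verbatim from the log above) and then re-check host state with the same status invocation the post-mortem helpers run:

	out/minikube-darwin-arm64 start -p newest-cni-471000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2 --kubernetes-version=v1.31.0-beta.0
	out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000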


Test pass (162/282)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 15.3
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.1
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.56
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.27
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 204.81
38 TestAddons/serial/Volcano 37.96
40 TestAddons/serial/GCPAuth/Namespaces 0.07
42 TestAddons/parallel/Registry 13.52
43 TestAddons/parallel/Ingress 18.1
44 TestAddons/parallel/InspektorGadget 10.23
45 TestAddons/parallel/MetricsServer 5.25
48 TestAddons/parallel/CSI 51.12
49 TestAddons/parallel/Headlamp 16.51
50 TestAddons/parallel/CloudSpanner 5.16
51 TestAddons/parallel/LocalPath 40.78
52 TestAddons/parallel/NvidiaDevicePlugin 5.14
53 TestAddons/parallel/Yakd 10.2
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.2
65 TestErrorSpam/setup 34.69
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.65
69 TestErrorSpam/unpause 0.61
70 TestErrorSpam/stop 64.29
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 51.25
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 62.16
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.48
82 TestFunctional/serial/CacheCmd/cache/add_local 1.12
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.66
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 38.78
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.68
93 TestFunctional/serial/LogsFileCmd 0.62
94 TestFunctional/serial/InvalidService 4.39
96 TestFunctional/parallel/ConfigCmd 0.21
97 TestFunctional/parallel/DashboardCmd 7.94
98 TestFunctional/parallel/DryRun 0.22
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 24.8
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.43
111 TestFunctional/parallel/FileSync 0.07
112 TestFunctional/parallel/CertSync 0.4
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
120 TestFunctional/parallel/License 0.21
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ServiceCmd/DeployApp 7.08
133 TestFunctional/parallel/ServiceCmd/List 0.28
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
136 TestFunctional/parallel/ServiceCmd/Format 0.1
137 TestFunctional/parallel/ServiceCmd/URL 0.1
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
139 TestFunctional/parallel/ProfileCmd/profile_list 0.12
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
141 TestFunctional/parallel/MountCmd/any-port 4.43
142 TestFunctional/parallel/MountCmd/specific-port 1.19
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.13
144 TestFunctional/parallel/Version/short 0.04
145 TestFunctional/parallel/Version/components 0.22
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
150 TestFunctional/parallel/ImageCommands/ImageBuild 1.61
151 TestFunctional/parallel/ImageCommands/Setup 1.83
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.51
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.19
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.23
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
159 TestFunctional/parallel/DockerEnv/bash 0.28
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 204.09
170 TestMultiControlPlane/serial/DeployApp 5.83
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 86.48
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.41
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 78.71
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 1.87
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.2
221 TestMainNoArgs 0.03
268 TestStoppedBinaryUpgrade/Setup 1.04
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
285 TestNoKubernetes/serial/ProfileList 31.36
286 TestNoKubernetes/serial/Stop 2.82
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
300 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
303 TestStartStop/group/old-k8s-version/serial/Stop 1.91
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
314 TestStartStop/group/no-preload/serial/Stop 1.99
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/embed-certs/serial/Stop 1.78
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.11
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
347 TestStartStop/group/newest-cni/serial/Stop 3.28
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-493000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-493000: exit status 85 (93.635708ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-493000 | jenkins | v1.33.1 | 25 Jul 24 10:27 PDT |          |
	|         | -p download-only-493000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:27:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:27:54.232158    1696 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:27:54.232311    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:27:54.232314    1696 out.go:304] Setting ErrFile to fd 2...
	I0725 10:27:54.232316    1696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:27:54.232454    1696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	W0725 10:27:54.232549    1696 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19326-1196/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19326-1196/.minikube/config/config.json: no such file or directory
	I0725 10:27:54.233853    1696 out.go:298] Setting JSON to true
	I0725 10:27:54.251118    1696 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1638,"bootTime":1721926836,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:27:54.251191    1696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:27:54.256477    1696 out.go:97] [download-only-493000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:27:54.256610    1696 notify.go:220] Checking for updates...
	W0725 10:27:54.256618    1696 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 10:27:54.260429    1696 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:27:54.263546    1696 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:27:54.268579    1696 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:27:54.271529    1696 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:27:54.274540    1696 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	W0725 10:27:54.280494    1696 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:27:54.280746    1696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:27:54.285515    1696 out.go:97] Using the qemu2 driver based on user configuration
	I0725 10:27:54.285534    1696 start.go:297] selected driver: qemu2
	I0725 10:27:54.285547    1696 start.go:901] validating driver "qemu2" against <nil>
	I0725 10:27:54.285618    1696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:27:54.288481    1696 out.go:169] Automatically selected the socket_vmnet network
	I0725 10:27:54.294168    1696 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0725 10:27:54.294258    1696 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:27:54.294286    1696 cni.go:84] Creating CNI manager for ""
	I0725 10:27:54.294304    1696 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0725 10:27:54.294350    1696 start.go:340] cluster config:
	{Name:download-only-493000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:27:54.299489    1696 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:27:54.303546    1696 out.go:97] Downloading VM boot image ...
	I0725 10:27:54.303564    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/iso/arm64/minikube-v1.33.1-1721690939-19319-arm64.iso
	I0725 10:28:03.296045    1696 out.go:97] Starting "download-only-493000" primary control-plane node in "download-only-493000" cluster
	I0725 10:28:03.296074    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:03.369179    1696 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:03.369185    1696 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:03.369333    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:03.373441    1696 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0725 10:28:03.373453    1696 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:03.453884    1696 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:14.959629    1696 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:14.959790    1696 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:15.656838    1696 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0725 10:28:15.657029    1696 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-493000/config.json ...
	I0725 10:28:15.657048    1696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-493000/config.json: {Name:mkcbe285ca3d49455fafab46dbe6de1c059a254e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:28:15.657291    1696 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0725 10:28:15.657488    1696 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0725 10:28:16.046820    1696 out.go:169] 
	W0725 10:28:16.052974    1696 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60 0x108cd5a60] Decompressors:map[bz2:0x1400000fdd0 gz:0x1400000fdd8 tar:0x1400000fd40 tar.bz2:0x1400000fd70 tar.gz:0x1400000fd80 tar.xz:0x1400000fd90 tar.zst:0x1400000fdc0 tbz2:0x1400000fd70 tgz:0x1400000fd80 txz:0x1400000fd90 tzst:0x1400000fdc0 xz:0x1400000fe00 zip:0x1400000fe20 zst:0x1400000fe08] Getters:map[file:0x1400054c600 http:0x14000754640 https:0x14000754690] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0725 10:28:16.052998    1696 out_reason.go:110] 
	W0725 10:28:16.059853    1696 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 10:28:16.062967    1696 out.go:169] 
	
	
	* The control-plane node download-only-493000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-493000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
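
Note on the failure captured above: the 404 is on the checksum fetch, not on the kubectl binary download itself. A manual check of the same URL, as a sketch with plain curl outside the test harness:

# Probe the checksum URL reported as 404 in the getter error above.
curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
# Per the log this returns 404, which suggests the v1.20.0 release predates
# published darwin/arm64 kubectl binaries; the ISO and preload still download.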

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-493000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (15.3s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-105000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-105000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (15.297415459s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (15.30s)
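
With -o=json, each stdout line is a CloudEvents-style JSON object. A minimal sketch for watching only the event types during such a run (assumes jq is installed; profile and flags reused from the log):

# Print one event type per line, e.g. io.k8s.sigs.minikube.step
# or io.k8s.sigs.minikube.download.progress.
out/minikube-darwin-arm64 start -o=json --download-only -p download-only-105000 \
  --force --alsologtostderr --kubernetes-version=v1.30.3 \
  --container-runtime=docker --driver=qemu2 | jq -r '.type'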

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-105000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-105000: exit status 85 (78.866917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-493000 | jenkins | v1.33.1 | 25 Jul 24 10:27 PDT |                     |
	|         | -p download-only-493000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-493000        | download-only-493000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only        | download-only-105000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-105000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:28:16
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:28:16.464833    1721 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:28:16.464959    1721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:16.464963    1721 out.go:304] Setting ErrFile to fd 2...
	I0725 10:28:16.464965    1721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:16.465096    1721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:28:16.466147    1721 out.go:298] Setting JSON to true
	I0725 10:28:16.482121    1721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1660,"bootTime":1721926836,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:28:16.482215    1721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:28:16.487335    1721 out.go:97] [download-only-105000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:28:16.487413    1721 notify.go:220] Checking for updates...
	I0725 10:28:16.491280    1721 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:28:16.494298    1721 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:28:16.498283    1721 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:28:16.501225    1721 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:28:16.504293    1721 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	W0725 10:28:16.510242    1721 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:28:16.510373    1721 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:28:16.513245    1721 out.go:97] Using the qemu2 driver based on user configuration
	I0725 10:28:16.513254    1721 start.go:297] selected driver: qemu2
	I0725 10:28:16.513258    1721 start.go:901] validating driver "qemu2" against <nil>
	I0725 10:28:16.513305    1721 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:28:16.516307    1721 out.go:169] Automatically selected the socket_vmnet network
	I0725 10:28:16.521228    1721 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0725 10:28:16.521307    1721 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:28:16.521322    1721 cni.go:84] Creating CNI manager for ""
	I0725 10:28:16.521331    1721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 10:28:16.521336    1721 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 10:28:16.521384    1721 start.go:340] cluster config:
	{Name:download-only-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:28:16.524817    1721 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:16.528251    1721 out.go:97] Starting "download-only-105000" primary control-plane node in "download-only-105000" cluster
	I0725 10:28:16.528259    1721 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:28:16.578132    1721 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 10:28:16.578155    1721 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:16.578311    1721 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:28:16.583360    1721 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0725 10:28:16.583367    1721 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:16.662602    1721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0725 10:28:29.762683    1721 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:29.762844    1721 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:30.307019    1721 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0725 10:28:30.307218    1721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-105000/config.json ...
	I0725 10:28:30.307234    1721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-105000/config.json: {Name:mkc469bea30852776a1a3c1fe9b689c2389f2f8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:28:30.307490    1721 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0725 10:28:30.307615    1721 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-105000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-105000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-105000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (12.56s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-826000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (12.554987666s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.56s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-826000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-826000: exit status 85 (76.570666ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-493000 | jenkins | v1.33.1 | 25 Jul 24 10:27 PDT |                     |
	|         | -p download-only-493000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-493000             | download-only-493000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only             | download-only-105000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-105000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| delete  | -p download-only-105000             | download-only-105000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT | 25 Jul 24 10:28 PDT |
	| start   | -o=json --download-only             | download-only-826000 | jenkins | v1.33.1 | 25 Jul 24 10:28 PDT |                     |
	|         | -p download-only-826000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 10:28:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 10:28:32.041110    1743 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:28:32.041239    1743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:32.041242    1743 out.go:304] Setting ErrFile to fd 2...
	I0725 10:28:32.041245    1743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:28:32.041366    1743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:28:32.042429    1743 out.go:298] Setting JSON to true
	I0725 10:28:32.058316    1743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1676,"bootTime":1721926836,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:28:32.058395    1743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:28:32.063364    1743 out.go:97] [download-only-826000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:28:32.063449    1743 notify.go:220] Checking for updates...
	I0725 10:28:32.066887    1743 out.go:169] MINIKUBE_LOCATION=19326
	I0725 10:28:32.070972    1743 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:28:32.073937    1743 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:28:32.076914    1743 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:28:32.079965    1743 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	W0725 10:28:32.083938    1743 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 10:28:32.084116    1743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:28:32.086860    1743 out.go:97] Using the qemu2 driver based on user configuration
	I0725 10:28:32.086870    1743 start.go:297] selected driver: qemu2
	I0725 10:28:32.086875    1743 start.go:901] validating driver "qemu2" against <nil>
	I0725 10:28:32.086922    1743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 10:28:32.089871    1743 out.go:169] Automatically selected the socket_vmnet network
	I0725 10:28:32.094973    1743 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0725 10:28:32.095066    1743 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 10:28:32.095084    1743 cni.go:84] Creating CNI manager for ""
	I0725 10:28:32.095091    1743 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0725 10:28:32.095101    1743 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0725 10:28:32.095141    1743 start.go:340] cluster config:
	{Name:download-only-826000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-826000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:28:32.098533    1743 iso.go:125] acquiring lock: {Name:mka6dfcbb8531498e57093fac2f872b00100a3be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 10:28:32.101893    1743 out.go:97] Starting "download-only-826000" primary control-plane node in "download-only-826000" cluster
	I0725 10:28:32.101900    1743 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:32.154136    1743 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:32.154151    1743 cache.go:56] Caching tarball of preloaded images
	I0725 10:28:32.154317    1743 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:32.159323    1743 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0725 10:28:32.159331    1743 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:32.238909    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0725 10:28:41.002936    1743 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:41.003090    1743 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0725 10:28:41.523073    1743 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0725 10:28:41.523320    1743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-826000/config.json ...
	I0725 10:28:41.523336    1743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/download-only-826000/config.json: {Name:mk12b9eb88e9537e6fab4aa9edb97df70731d277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 10:28:41.523571    1743 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0725 10:28:41.523688    1743 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19326-1196/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-826000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-826000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-826000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-014000 --alsologtostderr --binary-mirror http://127.0.0.1:49327 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-014000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-014000
--- PASS: TestBinaryMirror (0.27s)
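
TestBinaryMirror points minikube at a short-lived local HTTP mirror on port 49327. The same setup by hand, as a sketch (the served directory /tmp/k8s-mirror, its dl.k8s.io-style layout, and the binary-mirror-demo profile name are assumptions, not part of the test):

# Serve a local directory over HTTP, then have minikube fetch
# kubectl/kubelet/kubeadm from it instead of the default location.
python3 -m http.server 49327 --directory /tmp/k8s-mirror &
out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:49327 --driver=qemu2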

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-076000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-076000: exit status 85 (53.61725ms)

                                                
                                                
-- stdout --
	* Profile "addons-076000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-076000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-076000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-076000: exit status 85 (57.704584ms)

                                                
                                                
-- stdout --
	* Profile "addons-076000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-076000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (204.81s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-076000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-076000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m24.810378083s)
--- PASS: TestAddons/Setup (204.81s)

                                                
                                    
TestAddons/serial/Volcano (37.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 7.866334ms
addons_test.go:913: volcano-controller stabilized in 7.90125ms
addons_test.go:897: volcano-scheduler stabilized in 7.914334ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-pf6bl" [c568177f-7320-4fe6-88bc-c685595d1600] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004164042s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-g7rrm" [7ebbd16e-af1a-4874-b026-fd4cd8cb13b5] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003625792s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-jsk8c" [ffc9c93d-13fd-47f9-9454-a98f5d5b35b9] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003720417s
addons_test.go:932: (dbg) Run:  kubectl --context addons-076000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-076000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-076000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3ef1ba21-660e-42a6-935e-97d306a5eb20] Pending
helpers_test.go:344: "test-job-nginx-0" [3ef1ba21-660e-42a6-935e-97d306a5eb20] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3ef1ba21-660e-42a6-935e-97d306a5eb20] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004070042s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable volcano --alsologtostderr -v=1: (9.73624025s)
--- PASS: TestAddons/serial/Volcano (37.96s)
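
The testdata/vcjob.yaml applied above is not reproduced in this log. A plausible minimal equivalent that would yield the test-job-nginx-0 pod the wait loop matches on (a sketch against the Volcano batch/v1alpha1 API, not the repository's actual testdata):

kubectl --context addons-076000 apply -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
  - replicas: 1
    name: nginx   # task pods are named <job>-<task>-<index>, hence test-job-nginx-0
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: nginx
          image: nginx
EOF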

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-076000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-076000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)
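
What the two commands above assert: the gcp-auth addon replicates its credential secret into namespaces created after it is enabled. The same check by hand (the namespace name here is arbitrary):

kubectl --context addons-076000 create ns scratch-ns
kubectl --context addons-076000 get secret gcp-auth -n scratch-ns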

                                                
                                    
TestAddons/parallel/Registry (13.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.115916ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-p7z8x" [95f5b761-e09a-4a1f-aae1-e7975e043aeb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003533917s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hxgvz" [33b7527c-818d-4e1d-a31b-2ad3760463c2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003670542s
addons_test.go:342: (dbg) Run:  kubectl --context addons-076000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-076000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-076000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.252500709s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 ip
2024/07/25 10:33:17 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.52s)
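
The DEBUG line above probes the registry directly at the node IP on port 5000; the standard Docker Registry HTTP API v2 answers on the same address. A sketch (assumes the catalog endpoint is reachable from the host):

# List repositories held by the in-cluster registry addon.
curl -s "http://$(out/minikube-darwin-arm64 -p addons-076000 ip):5000/v2/_catalog"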

                                                
                                    
TestAddons/parallel/Ingress (18.1s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-076000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-076000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-076000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e190f25f-b09d-4ec1-9117-fb8f3f19f013] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e190f25f-b09d-4ec1-9117-fb8f3f19f013] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003660666s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-076000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable ingress --alsologtostderr -v=1: (7.189750375s)
--- PASS: TestAddons/parallel/Ingress (18.10s)
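
For reference, the core of this test is reproducible with three commands taken straight from the log: wait for the controller, curl through the ingress with the Host header it routes on, then resolve an ingress-dns name against the node IP.

  kubectl --context addons-076000 wait --for=condition=ready --namespace=ingress-nginx \
    pod --selector=app.kubernetes.io/component=controller --timeout=90s
  out/minikube-darwin-arm64 -p addons-076000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test $(out/minikube-darwin-arm64 -p addons-076000 ip)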

TestAddons/parallel/InspektorGadget (10.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6s8hz" [d8b03cd3-f628-43a3-8602-21eec05ec096] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004372208s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-076000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-076000: (5.222606792s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.3345ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-g8zgc" [e9db3658-124d-40fc-aa84-2234a9b25b5e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003819708s
addons_test.go:417: (dbg) Run:  kubectl --context addons-076000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)

TestAddons/parallel/CSI (51.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 3.22425ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-076000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-076000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a0ada99a-b950-44d8-bbb7-6894e9fd2689] Pending
helpers_test.go:344: "task-pv-pod" [a0ada99a-b950-44d8-bbb7-6894e9fd2689] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a0ada99a-b950-44d8-bbb7-6894e9fd2689] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003867333s
addons_test.go:590: (dbg) Run:  kubectl --context addons-076000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-076000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-076000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-076000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-076000 delete pod task-pv-pod: (1.088076125s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-076000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-076000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-076000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5574dbcc-8dc3-4f76-8996-61dca1a5c155] Pending
helpers_test.go:344: "task-pv-pod-restore" [5574dbcc-8dc3-4f76-8996-61dca1a5c155] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5574dbcc-8dc3-4f76-8996-61dca1a5c155] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003813833s
addons_test.go:632: (dbg) Run:  kubectl --context addons-076000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-076000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-076000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.106021625s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.12s)
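
The restore half of this sequence is the part worth spelling out: hpvc-restore is provisioned from the new-snapshot-demo VolumeSnapshot, so task-pv-pod-restore starts with the snapshotted data. The testdata YAML itself is not reproduced in the log; the sketch below shows the standard shape of such a claim (names match this run; the storage class and size are assumptions):

  kubectl --context addons-076000 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: hpvc-restore
  spec:
    storageClassName: csi-hostpath-sc   # assumed: the class the csi-hostpath-driver addon installs
    dataSource:
      name: new-snapshot-demo           # the VolumeSnapshot created earlier in the test
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi                    # assumed; must be at least the size of the snapshotted PVC
  EOF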

TestAddons/parallel/Headlamp (16.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-076000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-qn6n6" [32aed771-6564-4209-850d-d9dade2d2caa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-qn6n6" [32aed771-6564-4209-850d-d9dade2d2caa] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003995875s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable headlamp --alsologtostderr -v=1: (5.195354s)
--- PASS: TestAddons/parallel/Headlamp (16.51s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-zv7xh" [601c1518-88f9-450d-a1d0-e8d241b0b5df] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004026834s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-076000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.78s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-076000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-076000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [699735eb-1b7f-4819-bc59-6ed572bec69f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [699735eb-1b7f-4819-bc59-6ed572bec69f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [699735eb-1b7f-4819-bc59-6ed572bec69f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003708416s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-076000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 ssh "cat /opt/local-path-provisioner/pvc-43a6bd48-a7b7-4037-b04c-efc9bf72dc22_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-076000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-076000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.338392625s)
--- PASS: TestAddons/parallel/LocalPath (40.78s)
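
LocalPath drives the rancher local-path provisioner, which binds on first consumer: the PVC stays Pending (hence the phase polling above) until the pod is scheduled, after which the data lands under /opt/local-path-provisioner/<pv>_<namespace>_<pvc>/ on the node, exactly the path the ssh "cat ..." step reads back. A minimal sketch of such a claim (the local-path class name is the provisioner's usual default, an assumption here):

  kubectl --context addons-076000 apply -f - <<'EOF'
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    storageClassName: local-path        # assumed: default class of storage-provisioner-rancher
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 64Mi                   # assumed size for illustration
  EOF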

TestAddons/parallel/NvidiaDevicePlugin (5.14s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tv7sh" [e4d7bece-d159-441a-861f-f8eff74e217b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00385275s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-076000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.14s)

TestAddons/parallel/Yakd (10.2s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-pk4vb" [6931c430-92cd-4622-8814-4c9989d73be3] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003625666s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-076000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-076000 addons disable yakd --alsologtostderr -v=1: (5.198059292s)
--- PASS: TestAddons/parallel/Yakd (10.20s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-076000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-076000: (12.199673833s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-076000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-076000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-076000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.2s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.20s)

TestErrorSpam/setup (34.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-415000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-415000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 --driver=qemu2 : (34.692123041s)
--- PASS: TestErrorSpam/setup (34.69s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.61s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 unpause
--- PASS: TestErrorSpam/unpause (0.61s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop: (12.196158667s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop: (26.060128833s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-415000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-415000 stop: (26.032143958s)
--- PASS: TestErrorSpam/stop (64.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19326-1196/.minikube/files/etc/test/nested/copy/1694/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0725 10:37:10.285274    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.292054    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.304095    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.326161    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.368206    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.450302    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.612377    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:10.934445    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:11.576584    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:12.858702    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:15.420769    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:20.542785    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-963000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.254402083s)
--- PASS: TestFunctional/serial/StartWithProxy (51.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (62.16s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --alsologtostderr -v=8
E0725 10:37:30.784798    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:37:51.266506    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-963000 --alsologtostderr -v=8: (1m2.164167584s)
functional_test.go:659: soft start took 1m2.164526875s for "functional-963000" cluster.
--- PASS: TestFunctional/serial/SoftStart (62.16s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-963000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2476693193/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache add minikube-local-cache-test:functional-963000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache delete minikube-local-cache-test:functional-963000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-963000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.759958ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.66s)
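
Condensed, the contract this test checks: after an image is removed inside the node, crictl inspecti fails with exit status 1, and minikube cache reload pushes every cached image back so the same inspect succeeds again.

  out/minikube-darwin-arm64 -p functional-963000 ssh sudo docker rmi registry.k8s.io/pause:latest
  out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  out/minikube-darwin-arm64 -p functional-963000 cache reload
  out/minikube-darwin-arm64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again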

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 kubectl -- --context functional-963000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-963000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (38.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0725 10:38:32.227775    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-963000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.777878084s)
functional_test.go:757: restart took 38.777988375s for "functional-963000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.78s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-963000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.68s)

TestFunctional/serial/LogsFileCmd (0.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd599618064/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.62s)

TestFunctional/serial/InvalidService (4.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-963000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-963000: exit status 115 (103.700458ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32046 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-963000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-963000 delete -f testdata/invalidsvc.yaml: (1.182772s)
--- PASS: TestFunctional/serial/InvalidService (4.39s)

TestFunctional/parallel/ConfigCmd (0.21s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 config get cpus: exit status 14 (30.314291ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 config get cpus: exit status 14 (30.059208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
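
The exit codes are the substance here: config get on an unset key fails with exit status 14 and "specified key could not be found in config" on stderr, while set and unset return 0. The cycle from the log, condensed:

  out/minikube-darwin-arm64 -p functional-963000 config get cpus     # exit 14 while unset
  out/minikube-darwin-arm64 -p functional-963000 config set cpus 2
  out/minikube-darwin-arm64 -p functional-963000 config get cpus     # now succeeds
  out/minikube-darwin-arm64 -p functional-963000 config unset cpus   # back to exit 14 on the next get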

TestFunctional/parallel/DashboardCmd (7.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-963000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-963000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2647: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.94s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.829208ms)

-- stdout --
	* [functional-963000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0725 10:39:56.540546    2627 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:39:56.540670    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.540673    2627 out.go:304] Setting ErrFile to fd 2...
	I0725 10:39:56.540675    2627 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.540814    2627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:39:56.541800    2627 out.go:298] Setting JSON to false
	I0725 10:39:56.558424    2627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2360,"bootTime":1721926836,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:39:56.558496    2627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:39:56.562301    2627 out.go:177] * [functional-963000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0725 10:39:56.569223    2627 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:39:56.569298    2627 notify.go:220] Checking for updates...
	I0725 10:39:56.576185    2627 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:39:56.579296    2627 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:39:56.582196    2627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:39:56.585251    2627 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 10:39:56.588232    2627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:39:56.589852    2627 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:39:56.590110    2627 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:39:56.594198    2627 out.go:177] * Using the qemu2 driver based on existing profile
	I0725 10:39:56.601091    2627 start.go:297] selected driver: qemu2
	I0725 10:39:56.601098    2627 start.go:901] validating driver "qemu2" against &{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:39:56.601152    2627 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:39:56.606243    2627 out.go:177] 
	W0725 10:39:56.610213    2627 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 10:39:56.614208    2627 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)
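
Note what --dry-run exercises: the full option-validation path without touching the VM, so an undersized --memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) against the 1800MB usable minimum cited in the log, while the second, well-formed invocation exits 0.

  out/minikube-darwin-arm64 start -p functional-963000 --dry-run --memory 250MB --driver=qemu2   # exit 23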

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.602542ms)

-- stdout --
	* [functional-963000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0725 10:39:56.427226    2623 out.go:291] Setting OutFile to fd 1 ...
	I0725 10:39:56.427333    2623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.427336    2623 out.go:304] Setting ErrFile to fd 2...
	I0725 10:39:56.427338    2623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 10:39:56.427465    2623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
	I0725 10:39:56.428871    2623 out.go:298] Setting JSON to false
	I0725 10:39:56.446206    2623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2360,"bootTime":1721926836,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0725 10:39:56.446292    2623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0725 10:39:56.451298    2623 out.go:177] * [functional-963000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0725 10:39:56.458194    2623 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 10:39:56.458318    2623 notify.go:220] Checking for updates...
	I0725 10:39:56.465182    2623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	I0725 10:39:56.468231    2623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0725 10:39:56.471239    2623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 10:39:56.472526    2623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	I0725 10:39:56.475179    2623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 10:39:56.478525    2623 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0725 10:39:56.478780    2623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 10:39:56.483074    2623 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0725 10:39:56.490231    2623 start.go:297] selected driver: qemu2
	I0725 10:39:56.490238    2623 start.go:901] validating driver "qemu2" against &{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 10:39:56.490287    2623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 10:39:56.496160    2623 out.go:177] 
	W0725 10:39:56.500255    2623 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 10:39:56.504334    2623 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
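
The -f flag here takes a Go template rendered against minikube's status struct; the labels in the template (including the test's "kublet" spelling) are arbitrary output text, while {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the real fields. A hand-run sketch of the same check, with the template quoted for the shell (the output line is illustrative of a healthy cluster, not captured from this run):

	$ out/minikube-darwin-arm64 -p functional-963000 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured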

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (24.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3d30455d-11c5-46b6-9a1e-4cf2cf100ee7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003973333s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-963000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-963000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bbb89b45-b94b-493d-8733-a46e53b1a5e4] Pending
helpers_test.go:344: "sp-pod" [bbb89b45-b94b-493d-8733-a46e53b1a5e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bbb89b45-b94b-493d-8733-a46e53b1a5e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003987959s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-963000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-963000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2cb6542e-ef5f-41c1-ac26-f756c74223a2] Pending
helpers_test.go:344: "sp-pod" [2cb6542e-ef5f-41c1-ac26-f756c74223a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2cb6542e-ef5f-41c1-ac26-f756c74223a2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00370525s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-963000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.80s)
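
The sequence above verifies persistence across pod restarts: a PVC named myclaim is bound via the default storage class, a pod writes /tmp/mount/foo, the pod is deleted and recreated, and the file is still listed. An equivalent claim could be applied by hand roughly as follows (the access mode and requested size are assumptions for illustration, not the actual contents of testdata/storage-provisioner/pvc.yaml):

	kubectl --context functional-963000 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	EOF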

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.43s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh -n functional-963000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cp functional-963000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2703945037/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh -n functional-963000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh -n functional-963000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.43s)
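
The three cp invocations above exercise both copy directions plus destination-directory creation: host to guest, guest back to host, and host to guest into a path that does not yet exist. Condensed to its essentials (the host destination path is shortened here; the test uses a temporary directory):

	# host -> guest
	out/minikube-darwin-arm64 -p functional-963000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# guest -> host
	out/minikube-darwin-arm64 -p functional-963000 cp functional-963000:/home/docker/cp-test.txt ./cp-test.txt
	# host -> guest, creating /tmp/does/not/exist along the way
	out/minikube-darwin-arm64 -p functional-963000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt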

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1694/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /etc/test/nested/copy/1694/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
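
FileSync exercises minikube's file-sync mechanism: files placed under $MINIKUBE_HOME/files on the host are copied into the guest at the same relative path, which is how /etc/test/nested/copy/1694/hosts ended up in the VM. A rough sketch of seeding such a file by hand (the assumption here is that the sync is applied on the next start of the profile):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/1694"
	echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/1694/hosts"
	out/minikube-darwin-arm64 -p functional-963000 ssh "cat /etc/test/nested/copy/1694/hosts"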

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1694.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/1694.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1694.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /usr/share/ca-certificates/1694.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16942.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/16942.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16942.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /usr/share/ca-certificates/16942.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
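
CertSync checks each synced certificate in three places: the NAME.pem copy under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and a hash-named /etc/ssl/certs entry (51391683.0 and 3ec20f2e.0 above) derived from the certificate's OpenSSL subject hash. Assuming the pairing in the log reads 1694.pem -> 51391683.0, the hash can be reproduced on the host like this (the cert path is illustrative, not taken from the test):

	openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/1694.pem"
	# expected to print 51391683, matching /etc/ssl/certs/51391683.0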

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-963000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "sudo systemctl is-active crio": exit status 1 (90.225041ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
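
The non-zero exit here is the expected result: with the docker runtime active, "systemctl is-active crio" prints "inactive" and exits with status 3 (the conventional code for an inactive unit), ssh propagates that status (hence "Process exited with status 3"), and the minikube ssh wrapper surfaces it as a failing exit code. Run by hand it would look roughly like:

	$ out/minikube-darwin-arm64 -p functional-963000 ssh "sudo systemctl is-active crio"
	inactive
	$ echo $?
	1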

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2479: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ff1f6520-2f4f-42a9-9d19-6ec9bb5af835] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ff1f6520-2f4f-42a9-9d19-6ec9bb5af835] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00262925s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-963000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)
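
With the tunnel running, the nginx-svc LoadBalancer service receives an ingress IP that kubectl can read straight from the service status; the AccessDirect step that follows confirms the same address answers HTTP. For reference, the check above is equivalent to:

	$ kubectl --context functional-963000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	10.99.29.119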

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.29.119 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-963000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-963000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-963000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-kfp9f" [5560f8f4-890d-456c-a1fb-42d0dacaaca7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-kfp9f" [5560f8f4-890d-456c-a1fb-42d0dacaaca7] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003739917s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.08s)
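
DeployApp sets up the fixture that the remaining ServiceCmd subtests query: a deployment exposed as a NodePort service. The essential steps, together with the endpoint retrieval that the URL subtest performs later:

	kubectl --context functional-963000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-963000 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-darwin-arm64 -p functional-963000 service hello-node --url
	# -> http://192.168.105.4:32420 in this run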

TestFunctional/parallel/ServiceCmd/List (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service list -o json
functional_test.go:1490: Took "278.026375ms" to run "out/minikube-darwin-arm64 -p functional-963000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:32420
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:32420
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "85.482ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "32.719209ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "85.735042ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.431834ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
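
The timings above illustrate the point of --light: the plain JSON listing validates each cluster's status (about 86 ms here), while --light skips those status checks and returns in roughly a third of the time (about 34 ms). Side by side:

	out/minikube-darwin-arm64 profile list -o json           # queries cluster status
	out/minikube-darwin-arm64 profile list -o json --light   # skips status validation, faster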

TestFunctional/parallel/MountCmd/any-port (4.43s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1727737466/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721929188411197000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1727737466/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721929188411197000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1727737466/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721929188411197000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1727737466/001/test-1721929188411197000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.603ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 17:39 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 17:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 17:39 test-1721929188411197000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh cat /mount-9p/test-1721929188411197000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-963000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4dbbd253-d660-4511-84cd-3f6ffeb3912b] Pending
helpers_test.go:344: "busybox-mount" [4dbbd253-d660-4511-84cd-3f6ffeb3912b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4dbbd253-d660-4511-84cd-3f6ffeb3912b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4dbbd253-d660-4511-84cd-3f6ffeb3912b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004256875s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-963000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1727737466/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (4.43s)
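
The initial findmnt failure (exit status 1) is expected: the 9p mount is served by a background "minikube mount" process and takes a moment to appear inside the guest, so the test simply probes again. A hand-run sketch of the same flow (HOSTDIR is a placeholder for the temporary directory used above):

	out/minikube-darwin-arm64 mount -p functional-963000 "$HOSTDIR:/mount-9p" --alsologtostderr -v=1 &
	out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"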

TestFunctional/parallel/MountCmd/specific-port (1.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1160503561/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.252ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1160503561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "sudo umount -f /mount-9p": exit status 1 (61.23925ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-963000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1160503561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount1: exit status 1 (75.5525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0725 10:39:54.146036    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount3: exit status 1 (56.758417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-963000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-963000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup222852586/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.13s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-963000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-963000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-963000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-963000 image ls --format short --alsologtostderr:
I0725 10:40:04.243022    2773 out.go:291] Setting OutFile to fd 1 ...
I0725 10:40:04.243444    2773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.243449    2773 out.go:304] Setting ErrFile to fd 2...
I0725 10:40:04.243452    2773 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.243628    2773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:40:04.244416    2773 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.244491    2773 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.245341    2773 ssh_runner.go:195] Run: systemctl --version
I0725 10:40:04.245350    2773 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/functional-963000/id_rsa Username:docker}
I0725 10:40:04.272030    2773 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
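
Each of the four ImageList variants below renders the same data differently: per the stderr trace above, "image ls" runs docker images --no-trunc --format "{{json .}}" inside the VM over ssh and then formats the result. The four invocations, for reference:

	out/minikube-darwin-arm64 -p functional-963000 image ls --format short
	out/minikube-darwin-arm64 -p functional-963000 image ls --format table
	out/minikube-darwin-arm64 -p functional-963000 image ls --format json
	out/minikube-darwin-arm64 -p functional-963000 image ls --format yaml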

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-963000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-963000 | 03c17e4699fca | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-963000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-963000 image ls --format table --alsologtostderr:
I0725 10:40:04.771489    2784 out.go:291] Setting OutFile to fd 1 ...
I0725 10:40:04.771649    2784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.771652    2784 out.go:304] Setting ErrFile to fd 2...
I0725 10:40:04.771655    2784 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.771775    2784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:40:04.772180    2784 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.772243    2784 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.773050    2784 ssh_runner.go:195] Run: systemctl --version
I0725 10:40:04.773062    2784 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/functional-963000/id_rsa Username:docker}
I0725 10:40:04.799441    2784 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-963000 image ls --format json --alsologtostderr:
[{"id":"03c17e4699fca7d2ad28271375303b08d2b3fae16323a4e0838277a0067c8651","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-963000"],"size":"30"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-963000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-963000 image ls --format json --alsologtostderr:
I0725 10:40:04.699443    2782 out.go:291] Setting OutFile to fd 1 ...
I0725 10:40:04.699579    2782 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.699583    2782 out.go:304] Setting ErrFile to fd 2...
I0725 10:40:04.699585    2782 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.699703    2782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:40:04.700136    2782 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.700198    2782 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.700963    2782 ssh_runner.go:195] Run: systemctl --version
I0725 10:40:04.700972    2782 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/functional-963000/id_rsa Username:docker}
I0725 10:40:04.729014    2782 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-963000 image ls --format yaml --alsologtostderr:
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-963000
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 03c17e4699fca7d2ad28271375303b08d2b3fae16323a4e0838277a0067c8651
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-963000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-963000 image ls --format yaml --alsologtostderr:
I0725 10:40:04.316760    2775 out.go:291] Setting OutFile to fd 1 ...
I0725 10:40:04.316928    2775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.316934    2775 out.go:304] Setting ErrFile to fd 2...
I0725 10:40:04.316937    2775 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.317081    2775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:40:04.317507    2775 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.317571    2775 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.318443    2775 ssh_runner.go:195] Run: systemctl --version
I0725 10:40:04.318450    2775 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/functional-963000/id_rsa Username:docker}
I0725 10:40:04.345437    2775 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)
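
The YAML above is machine-readable as-is. A minimal Go sketch that runs the same listing command and decodes it, with the struct fields (id, repoDigests, repoTags, size) inferred from the output shown and gopkg.in/yaml.v3 as an assumed dependency:

// Sketch: decode `minikube image ls --format yaml`.
// Field names are taken from the YAML printed above, not from minikube's source.
package main

import (
    "fmt"
    "os/exec"

    "gopkg.in/yaml.v3"
)

type image struct {
    ID          string   `yaml:"id"`
    RepoDigests []string `yaml:"repoDigests"`
    RepoTags    []string `yaml:"repoTags"`
    Size        string   `yaml:"size"`
}

func main() {
    // Binary path and profile name are the ones from this test run.
    out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-963000",
        "image", "ls", "--format", "yaml").Output()
    if err != nil {
        panic(err)
    }
    var images []image
    if err := yaml.Unmarshal(out, &images); err != nil {
        panic(err)
    }
    for _, img := range images {
        fmt.Printf("%s %v (size %s)\n", img.ID[:12], img.RepoTags, img.Size)
    }
}

Run against this profile, it would print the same eighteen images listed above.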

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-963000 ssh pgrep buildkitd: exit status 1 (63.567958ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr
2024/07/25 10:40:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr: (1.476930041s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr:
I0725 10:40:04.455589    2779 out.go:291] Setting OutFile to fd 1 ...
I0725 10:40:04.457791    2779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.457798    2779 out.go:304] Setting ErrFile to fd 2...
I0725 10:40:04.457801    2779 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 10:40:04.457989    2779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19326-1196/.minikube/bin
I0725 10:40:04.458520    2779 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.459409    2779 config.go:182] Loaded profile config "functional-963000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0725 10:40:04.460328    2779 ssh_runner.go:195] Run: systemctl --version
I0725 10:40:04.460337    2779 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19326-1196/.minikube/machines/functional-963000/id_rsa Username:docker}
I0725 10:40:04.487029    2779 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3911297312.tar
I0725 10:40:04.487115    2779 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0725 10:40:04.490799    2779 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3911297312.tar
I0725 10:40:04.492378    2779 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3911297312.tar: stat -c "%s %y" /var/lib/minikube/build/build.3911297312.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3911297312.tar': No such file or directory
I0725 10:40:04.492396    2779 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3911297312.tar --> /var/lib/minikube/build/build.3911297312.tar (3072 bytes)
I0725 10:40:04.501644    2779 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3911297312
I0725 10:40:04.505313    2779 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3911297312 -xf /var/lib/minikube/build/build.3911297312.tar
I0725 10:40:04.508914    2779 docker.go:360] Building image: /var/lib/minikube/build/build.3911297312
I0725 10:40:04.508954    2779 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-963000 /var/lib/minikube/build/build.3911297312
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers done
#8 writing image sha256:368717c2a7f19f63da4d701be156fe36ac2f87cd703e02176131425fe6092b2f done
#8 naming to localhost/my-image:functional-963000 done
#8 DONE 0.0s
I0725 10:40:05.888731    2779 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-963000 /var/lib/minikube/build/build.3911297312: (1.379797834s)
I0725 10:40:05.888797    2779 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3911297312
I0725 10:40:05.893225    2779 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3911297312.tar
I0725 10:40:05.896416    2779 build_images.go:217] Built localhost/my-image:functional-963000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.3911297312.tar
I0725 10:40:05.896434    2779 build_images.go:133] succeeded building to: functional-963000
I0725 10:40:05.896439    2779 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.61s)
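
The numbered stages above map to a three-instruction Dockerfile under testdata/build (FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /), and the pgrep non-zero exit is expected: it only confirms buildkitd is not running before the build goes through `docker build` inside the guest (docker.go:360). A minimal Go sketch of the same build-then-verify flow, assuming the binary path and profile from this run:

// Sketch: replay the ImageBuild flow (probe, build, list-and-check).
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

const (
    bin     = "out/minikube-darwin-arm64" // binary under test in this run
    profile = "functional-963000"
    tag     = "localhost/my-image:functional-963000"
)

func main() {
    // Expected to fail when buildkitd is absent, as in the log above.
    _ = exec.Command(bin, "-p", profile, "ssh", "pgrep", "buildkitd").Run()

    if err := exec.Command(bin, "-p", profile, "image", "build",
        "-t", tag, "testdata/build").Run(); err != nil {
        panic(err)
    }

    out, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
    if err != nil {
        panic(err)
    }
    if !strings.Contains(string(out), tag) {
        panic(fmt.Sprintf("built image %s not listed", tag))
    }
    fmt.Println("image built and listed:", tag)
}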

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.816770625s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-963000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image save docker.io/kicbase/echo-server:functional-963000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image rm docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-963000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 image save --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.28s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-963000 docker-env) && out/minikube-darwin-arm64 status -p functional-963000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-963000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)
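
The bash invocation above works because `docker-env` prints `export KEY="VALUE"` lines for the subshell to evaluate. The same round trip without a shell, as a sketch that assumes that conventional output format:

// Sketch: point `docker images` at the cluster's daemon by parsing
// `minikube docker-env` export lines instead of eval-ing them in bash.
package main

import (
    "os"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("out/minikube-darwin-arm64",
        "-p", "functional-963000", "docker-env").Output()
    if err != nil {
        panic(err)
    }
    env := os.Environ()
    for _, line := range strings.Split(string(out), "\n") {
        line = strings.TrimSpace(line)
        if !strings.HasPrefix(line, "export ") {
            continue // skip comment lines in the docker-env output
        }
        kv := strings.TrimPrefix(line, "export ")
        env = append(env, strings.ReplaceAll(kv, `"`, ""))
    }
    cmd := exec.Command("docker", "images")
    cmd.Env = env
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        panic(err)
    }
}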

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-963000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-963000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.09s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-603000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0725 10:42:10.277706    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
E0725 10:42:37.984166    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/addons-076000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-603000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m23.895346667s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.09s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.83s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-603000 -- rollout status deployment/busybox: (4.337705125s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-7f255 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-m85vn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-qsj2n -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-7f255 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-m85vn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-qsj2n -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-7f255 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-m85vn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-qsj2n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.83s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-7f255 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-7f255 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-m85vn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-m85vn -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-qsj2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-603000 -- exec busybox-fc5497c4f-qsj2n -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)
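
The shell pipeline in these exec calls isolates the resolved address of host.minikube.internal: `awk 'NR==5'` keeps nslookup's fifth output line (the answer's Address line) and `cut -d' ' -f3` takes its third field, the IP, after which the hard-coded `ping -c 1 192.168.105.1` confirms the QEMU host gateway of this run is reachable. A sketch of the same check driven from Go; unlike the test, it pings the extracted address rather than the fixed gateway:

// Sketch: resolve host.minikube.internal in each pod, then ping it.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

const bin = "out/minikube-darwin-arm64" // invoked as `minikube kubectl` below

func run(args ...string) string {
    out, err := exec.Command(bin, args...).Output()
    if err != nil {
        panic(err)
    }
    return strings.TrimSpace(string(out))
}

func main() {
    pods := strings.Fields(run("kubectl", "-p", "ha-603000", "--",
        "get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
    for _, pod := range pods {
        // Same awk/cut pipeline the test uses to extract the resolved IP.
        ip := run("kubectl", "-p", "ha-603000", "--", "exec", pod, "--",
            "sh", "-c", "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
        run("kubectl", "-p", "ha-603000", "--", "exec", pod, "--",
            "sh", "-c", "ping -c 1 "+ip)
        fmt.Println(pod, "can reach the host at", ip)
    }
}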

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (86.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-603000 -v=7 --alsologtostderr
E0725 10:44:15.296788    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.303147    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.314308    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.336494    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.378599    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.460702    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.622824    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:15.944980    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:16.587121    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:17.867376    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:20.429490    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:25.551537    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:35.791582    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0725 10:44:56.269926    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-603000 -v=7 --alsologtostderr: (1m26.255399s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (86.48s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-603000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (4.41s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp testdata/cp-test.txt ha-603000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2930979871/001/cp-test_ha-603000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000:/home/docker/cp-test.txt ha-603000-m02:/home/docker/cp-test_ha-603000_ha-603000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test_ha-603000_ha-603000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000:/home/docker/cp-test.txt ha-603000-m03:/home/docker/cp-test_ha-603000_ha-603000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test_ha-603000_ha-603000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000:/home/docker/cp-test.txt ha-603000-m04:/home/docker/cp-test_ha-603000_ha-603000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test_ha-603000_ha-603000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp testdata/cp-test.txt ha-603000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2930979871/001/cp-test_ha-603000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m02:/home/docker/cp-test.txt ha-603000:/home/docker/cp-test_ha-603000-m02_ha-603000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test_ha-603000-m02_ha-603000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m02:/home/docker/cp-test.txt ha-603000-m03:/home/docker/cp-test_ha-603000-m02_ha-603000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test_ha-603000-m02_ha-603000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m02:/home/docker/cp-test.txt ha-603000-m04:/home/docker/cp-test_ha-603000-m02_ha-603000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test_ha-603000-m02_ha-603000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp testdata/cp-test.txt ha-603000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2930979871/001/cp-test_ha-603000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m03:/home/docker/cp-test.txt ha-603000:/home/docker/cp-test_ha-603000-m03_ha-603000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test_ha-603000-m03_ha-603000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m03:/home/docker/cp-test.txt ha-603000-m02:/home/docker/cp-test_ha-603000-m03_ha-603000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test_ha-603000-m03_ha-603000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m03:/home/docker/cp-test.txt ha-603000-m04:/home/docker/cp-test_ha-603000-m03_ha-603000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test_ha-603000-m03_ha-603000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp testdata/cp-test.txt ha-603000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2930979871/001/cp-test_ha-603000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m04:/home/docker/cp-test.txt ha-603000:/home/docker/cp-test_ha-603000-m04_ha-603000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000 "sudo cat /home/docker/cp-test_ha-603000-m04_ha-603000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m04:/home/docker/cp-test.txt ha-603000-m02:/home/docker/cp-test_ha-603000-m04_ha-603000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m02 "sudo cat /home/docker/cp-test_ha-603000-m04_ha-603000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 cp ha-603000-m04:/home/docker/cp-test.txt ha-603000-m03:/home/docker/cp-test_ha-603000-m04_ha-603000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-603000 ssh -n ha-603000-m03 "sudo cat /home/docker/cp-test_ha-603000-m04_ha-603000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.41s)
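
Every step in this matrix is the same two-command pattern: `minikube cp` pushes a file into a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back to verify it arrived. A small Go helper capturing that pair (a sketch; the binary, profile, and paths are the ones from this run):

// Sketch: the copy-then-cat verification pattern repeated above
// for every node pair (cf. helpers_test.go:556 and :534).
package main

import (
    "fmt"
    "os/exec"
)

const bin = "out/minikube-darwin-arm64"

// cpAndVerify copies src to <node>:<dst>, then reads it back over ssh.
func cpAndVerify(profile, src, node, dst string) error {
    if err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
        return err
    }
    out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
        "sudo cat "+dst).Output()
    if err != nil {
        return err
    }
    fmt.Printf("%s:%s -> %q\n", node, dst, out)
    return nil
}

func main() {
    if err := cpAndVerify("ha-603000", "testdata/cp-test.txt",
        "ha-603000-m02", "/home/docker/cp-test.txt"); err != nil {
        panic(err)
    }
}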

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0725 10:54:15.277775    1694 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19326-1196/.minikube/profiles/functional-963000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.708114083s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (78.71s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (1.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-180000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-180000 --output=json --user=testUser: (1.871945583s)
--- PASS: TestJSONOutput/stop/Command (1.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-189000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-189000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.77225ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db4d616e-89f5-4673-b27e-3c03efdc3071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-189000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3553085-717a-4f6d-bf68-f2920f0f9b5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"6f56330a-850c-41bd-be5a-bd7565c5333d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig"}}
	{"specversion":"1.0","id":"7eab7df0-fc6b-41bb-9ea2-7e2b7da4d4c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"771eae2b-1bae-4fd7-9ec1-13426df97272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c08b7c9c-45b5-4fd3-84a0-f0e460ccd469","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube"}}
	{"specversion":"1.0","id":"05f66098-b9e3-4ac0-b429-8b7f4bbc86ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"01548e81-fa87-46ac-a523-1bd2614aa879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-189000
--- PASS: TestErrorJSONOutput (0.20s)
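
Each stdout line above is a CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the exit code and message the test asserts on. A Go sketch that decodes such a stream, modeling only the fields visible in the log:

// Sketch: decode line-delimited CloudEvents JSON from
// `minikube start --output=json` piped to stdin.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

type event struct {
    SpecVersion string            `json:"specversion"`
    Type        string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
    Data        map[string]string `json:"data"`
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    for sc.Scan() {
        var ev event
        if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
            continue // tolerate non-JSON lines
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("error %s (exit %s): %s\n",
                ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
        }
    }
}

Fed the stdout block above, it would report DRV_UNSUPPORTED_OS with exit code 56.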

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.04s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-007000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (97.790458ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-007000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19326-1196/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19326-1196/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.547875ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-007000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.36s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.623327541s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.737629s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.82s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-007000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-007000: (2.8184905s)
--- PASS: TestNoKubernetes/serial/Stop (2.82s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-007000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (40.5615ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-007000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-007000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-820000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-309000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-309000 --alsologtostderr -v=3: (1.91043225s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.91s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-309000 -n old-k8s-version-309000: exit status 7 (44.957375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-309000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.99s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-422000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-422000 --alsologtostderr -v=3: (1.988945959s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-422000 -n no-preload-422000: exit status 7 (56.620334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-422000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (1.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-205000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-205000 --alsologtostderr -v=3: (1.778456708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.78s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-205000 -n embed-certs-205000: exit status 7 (55.303167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-205000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-986000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-986000 --alsologtostderr -v=3: (2.111107709s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (54.267375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-986000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-471000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-471000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-471000 --alsologtostderr -v=3: (3.276767167s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.28s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-471000 -n newest-cni-471000: exit status 7 (55.510958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-471000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/282)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-411000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-411000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-411000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/hosts:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/resolv.conf:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-411000

>>> host: crictl pods:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: crictl containers:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> k8s: describe netcat deployment:
error: context "cilium-411000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-411000" does not exist

>>> k8s: netcat logs:
error: context "cilium-411000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-411000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-411000" does not exist

>>> k8s: coredns logs:
error: context "cilium-411000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-411000" does not exist

>>> k8s: api server logs:
error: context "cilium-411000" does not exist

>>> host: /etc/cni:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: ip a s:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: ip r s:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: iptables-save:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: iptables table nat:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-411000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-411000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-411000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-411000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-411000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-411000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-411000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-411000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-411000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-411000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-411000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: kubelet daemon config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> k8s: kubelet logs:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-411000

>>> host: docker daemon status:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: docker daemon config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: docker system info:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: cri-docker daemon status:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: cri-docker daemon config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: cri-dockerd version:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: containerd daemon status:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: containerd daemon config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: containerd config dump:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: crio daemon status:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: crio daemon config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: /etc/crio:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

>>> host: crio config:
* Profile "cilium-411000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-411000"

----------------------- debugLogs end: cilium-411000 [took: 2.158189292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-411000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-411000
--- SKIP: TestNetworkPlugins/group/cilium (2.26s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-099000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-099000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)