Test Report: QEMU_macOS 19356

904aab08df45b60a074395618a72550fbda0cd8b:2024-07-31:35586

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.59
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.02
55 TestCertOptions 12.1
56 TestCertExpiration 197.42
57 TestDockerFlags 12.63
58 TestForceSystemdFlag 12.25
59 TestForceSystemdEnv 10.33
104 TestFunctional/parallel/ServiceCmdConnect 34.66
176 TestMultiControlPlane/serial/StopSecondaryNode 312.31
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.13
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.24
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 329.57
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 218.29
186 TestImageBuild/serial/Setup 10.22
189 TestJSONOutput/start/Command 9.77
195 TestJSONOutput/pause/Command 0.08
201 TestJSONOutput/unpause/Command 0.04
218 TestMinikubeProfile 10.02
221 TestMountStart/serial/StartWithMountFirst 10.04
224 TestMultiNode/serial/FreshStart2Nodes 9.87
225 TestMultiNode/serial/DeployApp2Nodes 107.09
226 TestMultiNode/serial/PingHostFrom2Pods 0.08
227 TestMultiNode/serial/AddNode 0.07
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.08
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.14
232 TestMultiNode/serial/StartAfterStop 40.91
233 TestMultiNode/serial/RestartKeepsNodes 8.54
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 2.75
236 TestMultiNode/serial/RestartMultiNode 5.26
237 TestMultiNode/serial/ValidateNameConflict 20.06
241 TestPreload 10.06
243 TestScheduledStopUnix 10.1
244 TestSkaffold 12.35
247 TestRunningBinaryUpgrade 602
249 TestKubernetesUpgrade 18.48
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.52
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.97
265 TestStoppedBinaryUpgrade/Upgrade 579.86
267 TestPause/serial/Start 9.82
277 TestNoKubernetes/serial/StartWithK8s 10.12
278 TestNoKubernetes/serial/StartWithStopK8s 5.27
279 TestNoKubernetes/serial/Start 5.3
283 TestNoKubernetes/serial/StartNoArgs 5.29
285 TestNetworkPlugins/group/auto/Start 9.92
286 TestNetworkPlugins/group/kindnet/Start 9.77
287 TestNetworkPlugins/group/calico/Start 9.89
288 TestNetworkPlugins/group/custom-flannel/Start 9.82
289 TestNetworkPlugins/group/false/Start 9.75
290 TestNetworkPlugins/group/enable-default-cni/Start 9.94
291 TestNetworkPlugins/group/flannel/Start 9.85
292 TestNetworkPlugins/group/bridge/Start 9.8
293 TestNetworkPlugins/group/kubenet/Start 9.86
295 TestStartStop/group/old-k8s-version/serial/FirstStart 10.03
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 9.82
308 TestStartStop/group/no-preload/serial/DeployApp 0.09
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
312 TestStartStop/group/no-preload/serial/SecondStart 5.26
314 TestStartStop/group/embed-certs/serial/FirstStart 10.31
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
318 TestStartStop/group/no-preload/serial/Pause 0.1
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.91
321 TestStartStop/group/embed-certs/serial/DeployApp 0.09
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
325 TestStartStop/group/embed-certs/serial/SecondStart 5.72
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.25
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/embed-certs/serial/Pause 0.1
336 TestStartStop/group/newest-cni/serial/FirstStart 9.94
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
345 TestStartStop/group/newest-cni/serial/SecondStart 5.25
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (11.59s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-382000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-382000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (11.593463708s)

-- stdout --
	{"specversion":"1.0","id":"a8a1210b-89eb-4a1f-9ba4-95c88ba25f5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-382000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3a4f4cd-b86b-49ef-8b73-b6a7dbc2a7e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19356"}}
	{"specversion":"1.0","id":"843c95bb-d7ab-41c5-8c3f-11df82b831b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig"}}
	{"specversion":"1.0","id":"e872fd04-e4c5-4b0b-9549-13b86de4d98f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6d7ee698-5898-4222-9682-b9e8a41cf9dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7195fe4c-dff7-4023-bbff-dbd55cddad64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube"}}
	{"specversion":"1.0","id":"08aa1bf4-ebeb-46ef-b9c3-77865b168e51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"3c170194-b0d3-4295-8554-61baf27cb79b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"43dca70c-e0a3-41b6-a07a-c6dd8e8f4415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"db0aec7d-734c-4c61-9291-372bfe4b8e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0962832a-6914-49a5-b38a-ae232598745b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-382000\" primary control-plane node in \"download-only-382000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"876d6d35-3880-4d31-a5a9-e728966d07c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb9d2b46-be55-455a-b63a-d9af7a607380","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60] Decompressors:map[bz2:0x1400081adb0 gz:0x1400081adb8 tar:0x1400081ad60 tar.bz2:0x1400081ad70 tar.gz:0x1400081ad80 tar.xz:0x1400081ad90 tar.zst:0x1400081ada0 tbz2:0x1400081ad70 tgz:0x1400081ad80 txz:0x1400081ad90 tzst:0x1400081ada0 xz:0x1400081adc0 zip:0x1400081add0 zst:0x1400081adc8] Getters:map[file:0x140007d8550 http:0x1400088c320 https:0x1400088c370] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"261dbeec-a480-40e3-bb69-547ef169ea4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0731 11:14:00.365281    1705 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:14:00.365419    1705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:00.365423    1705 out.go:304] Setting ErrFile to fd 2...
	I0731 11:14:00.365425    1705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:00.365551    1705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	W0731 11:14:00.365636    1705 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19356-1202/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19356-1202/.minikube/config/config.json: no such file or directory
	I0731 11:14:00.366845    1705 out.go:298] Setting JSON to true
	I0731 11:14:00.384067    1705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":809,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:14:00.384130    1705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:14:00.390530    1705 out.go:97] [download-only-382000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:14:00.390666    1705 notify.go:220] Checking for updates...
	W0731 11:14:00.390730    1705 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 11:14:00.394452    1705 out.go:169] MINIKUBE_LOCATION=19356
	I0731 11:14:00.397499    1705 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:14:00.402476    1705 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:14:00.405516    1705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:14:00.408443    1705 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	W0731 11:14:00.414514    1705 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:14:00.414748    1705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:14:00.420510    1705 out.go:97] Using the qemu2 driver based on user configuration
	I0731 11:14:00.420529    1705 start.go:297] selected driver: qemu2
	I0731 11:14:00.420543    1705 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:14:00.420621    1705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:14:00.424476    1705 out.go:169] Automatically selected the socket_vmnet network
	I0731 11:14:00.430271    1705 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 11:14:00.430401    1705 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:14:00.430466    1705 cni.go:84] Creating CNI manager for ""
	I0731 11:14:00.430482    1705 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 11:14:00.430540    1705 start.go:340] cluster config:
	{Name:download-only-382000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:14:00.436016    1705 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:14:00.440486    1705 out.go:97] Downloading VM boot image ...
	I0731 11:14:00.440504    1705 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 11:14:05.134704    1705 out.go:97] Starting "download-only-382000" primary control-plane node in "download-only-382000" cluster
	I0731 11:14:05.134729    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:05.212403    1705 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:05.212424    1705 cache.go:56] Caching tarball of preloaded images
	I0731 11:14:05.212614    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:05.217755    1705 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 11:14:05.217763    1705 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:05.293884    1705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:10.734025    1705 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:10.734343    1705 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:11.429638    1705 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 11:14:11.429835    1705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-382000/config.json ...
	I0731 11:14:11.429854    1705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-382000/config.json: {Name:mk1a7121662644079b464b6bc0c63858f6cc49b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:14:11.430087    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:11.430289    1705 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 11:14:11.885864    1705 out.go:169] 
	W0731 11:14:11.891819    1705 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60] Decompressors:map[bz2:0x1400081adb0 gz:0x1400081adb8 tar:0x1400081ad60 tar.bz2:0x1400081ad70 tar.gz:0x1400081ad80 tar.xz:0x1400081ad90 tar.zst:0x1400081ada0 tbz2:0x1400081ad70 tgz:0x1400081ad80 txz:0x1400081ad90 tzst:0x1400081ada0 xz:0x1400081adc0 zip:0x1400081add0 zst:0x1400081adc8] Getters:map[file:0x140007d8550 http:0x1400088c320 https:0x1400088c370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 11:14:11.891844    1705 out_reason.go:110] 
	W0731 11:14:11.898859    1705 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:14:11.902655    1705 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-382000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (11.59s)
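Note: the exit status 40 traces back to the 404 in the log above — dl.k8s.io serves no darwin/arm64 kubectl binary (or its .sha256 checksum) for v1.20.0, so the cache step cannot succeed. A quick manual confirmation, as a hypothetical diagnostic outside the test suite:

	# The arm64 checksum URL referenced in the error; per the log this returns 404
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1
	# The amd64 equivalent for the same version, for comparison
	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 | head -n 1

This also explains the follow-on TestDownloadOnly/v1.20.0/kubectl failure below: the binary was never cached, so the stat fails.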

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-305000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-305000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.868015666s)

-- stdout --
	* [offline-docker-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-305000" primary control-plane node in "offline-docker-305000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-305000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:59:10.707006    3853 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:59:10.707140    3853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:10.707143    3853 out.go:304] Setting ErrFile to fd 2...
	I0731 11:59:10.707145    3853 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:10.707271    3853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:59:10.708336    3853 out.go:298] Setting JSON to false
	I0731 11:59:10.725710    3853 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3519,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:59:10.725777    3853 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:59:10.730325    3853 out.go:177] * [offline-docker-305000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:59:10.738292    3853 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:59:10.738356    3853 notify.go:220] Checking for updates...
	I0731 11:59:10.743195    3853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:59:10.746396    3853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:59:10.749242    3853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:59:10.752225    3853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:59:10.755240    3853 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:59:10.758583    3853 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:10.758657    3853 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:59:10.762243    3853 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:59:10.769309    3853 start.go:297] selected driver: qemu2
	I0731 11:59:10.769325    3853 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:59:10.769333    3853 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:59:10.771180    3853 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:59:10.774228    3853 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:59:10.777294    3853 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:59:10.777312    3853 cni.go:84] Creating CNI manager for ""
	I0731 11:59:10.777320    3853 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:59:10.777324    3853 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:59:10.777354    3853 start.go:340] cluster config:
	{Name:offline-docker-305000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:59:10.781014    3853 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:59:10.786173    3853 out.go:177] * Starting "offline-docker-305000" primary control-plane node in "offline-docker-305000" cluster
	I0731 11:59:10.790182    3853 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:59:10.790205    3853 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:59:10.790216    3853 cache.go:56] Caching tarball of preloaded images
	I0731 11:59:10.790279    3853 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:59:10.790284    3853 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:59:10.790345    3853 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/offline-docker-305000/config.json ...
	I0731 11:59:10.790361    3853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/offline-docker-305000/config.json: {Name:mk8d5dccc326c1171ae57c422b52d552619ff126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:10.790676    3853 start.go:360] acquireMachinesLock for offline-docker-305000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:10.790713    3853 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "offline-docker-305000"
	I0731 11:59:10.790722    3853 start.go:93] Provisioning new machine with config: &{Name:offline-docker-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:10.790764    3853 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:10.795214    3853 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:10.810868    3853 start.go:159] libmachine.API.Create for "offline-docker-305000" (driver="qemu2")
	I0731 11:59:10.810904    3853 client.go:168] LocalClient.Create starting
	I0731 11:59:10.810982    3853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:10.811011    3853 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:10.811023    3853 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:10.811067    3853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:10.811090    3853 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:10.811097    3853 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:10.811436    3853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:10.959015    3853 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:11.106011    3853 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:11.106019    3853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:11.106259    3853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:11.122588    3853 main.go:141] libmachine: STDOUT: 
	I0731 11:59:11.122614    3853 main.go:141] libmachine: STDERR: 
	I0731 11:59:11.122674    3853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2 +20000M
	I0731 11:59:11.131514    3853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:11.131532    3853 main.go:141] libmachine: STDERR: 
	I0731 11:59:11.131560    3853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:11.131564    3853 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:11.131577    3853 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:11.131606    3853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:70:b9:d4:3a:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:11.133480    3853 main.go:141] libmachine: STDOUT: 
	I0731 11:59:11.133499    3853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:11.133520    3853 client.go:171] duration metric: took 322.617625ms to LocalClient.Create
	I0731 11:59:13.135654    3853 start.go:128] duration metric: took 2.344924625s to createHost
	I0731 11:59:13.135687    3853 start.go:83] releasing machines lock for "offline-docker-305000", held for 2.345021375s
	W0731 11:59:13.135711    3853 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:13.149544    3853 out.go:177] * Deleting "offline-docker-305000" in qemu2 ...
	W0731 11:59:13.166236    3853 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:13.166248    3853 start.go:729] Will try again in 5 seconds ...
	I0731 11:59:18.168251    3853 start.go:360] acquireMachinesLock for offline-docker-305000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:18.168657    3853 start.go:364] duration metric: took 264.792µs to acquireMachinesLock for "offline-docker-305000"
	I0731 11:59:18.168794    3853 start.go:93] Provisioning new machine with config: &{Name:offline-docker-305000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-305000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:18.169245    3853 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:18.180774    3853 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:18.225904    3853 start.go:159] libmachine.API.Create for "offline-docker-305000" (driver="qemu2")
	I0731 11:59:18.225945    3853 client.go:168] LocalClient.Create starting
	I0731 11:59:18.226053    3853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:18.226112    3853 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:18.226126    3853 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:18.226189    3853 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:18.226240    3853 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:18.226251    3853 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:18.226711    3853 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:18.381656    3853 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:18.482902    3853 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:18.482909    3853 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:18.483145    3853 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:18.492461    3853 main.go:141] libmachine: STDOUT: 
	I0731 11:59:18.492480    3853 main.go:141] libmachine: STDERR: 
	I0731 11:59:18.492525    3853 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2 +20000M
	I0731 11:59:18.500440    3853 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:18.500453    3853 main.go:141] libmachine: STDERR: 
	I0731 11:59:18.500467    3853 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:18.500475    3853 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:18.500485    3853 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:18.500513    3853 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6b:30:73:02:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/offline-docker-305000/disk.qcow2
	I0731 11:59:18.502066    3853 main.go:141] libmachine: STDOUT: 
	I0731 11:59:18.502082    3853 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:18.502095    3853 client.go:171] duration metric: took 276.151417ms to LocalClient.Create
	I0731 11:59:20.504262    3853 start.go:128] duration metric: took 2.335030208s to createHost
	I0731 11:59:20.504322    3853 start.go:83] releasing machines lock for "offline-docker-305000", held for 2.335695792s
	W0731 11:59:20.504759    3853 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-305000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:20.513259    3853 out.go:177] 
	W0731 11:59:20.517425    3853 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:59:20.517449    3853 out.go:239] * 
	* 
	W0731 11:59:20.519921    3853 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:59:20.530328    3853 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-305000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-31 11:59:20.546199 -0700 PDT m=+2720.377379960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-305000 -n offline-docker-305000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-305000 -n offline-docker-305000: exit status 7 (65.776875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-305000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-305000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-305000
--- FAIL: TestOffline (10.02s)
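Note: every qemu2 start in this run fails the same way — socket_vmnet_client cannot reach the daemon socket, minikube retries host creation once, then aborts with GUEST_PROVISION. A minimal sketch for checking the daemon on the agent; the daemon path and gateway address below are assumptions inferred from the client path /opt/socket_vmnet/bin/socket_vmnet_client in the log and socket_vmnet's documented defaults:

	# Does the socket exist, and is the daemon running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If not, start it (vmnet access requires root)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet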

TestCertOptions (12.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-939000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-939000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (11.839935375s)

-- stdout --
	* [cert-options-939000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-939000" primary control-plane node in "cert-options-939000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-939000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-939000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-939000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-939000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-939000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.005042ms)

-- stdout --
	* The control-plane node cert-options-939000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-939000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-939000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-939000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-939000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-939000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.181083ms)

-- stdout --
	* The control-plane node cert-options-939000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-939000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-939000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-939000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-939000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-31 11:59:55.645749 -0700 PDT m=+2755.477691543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-939000 -n cert-options-939000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-939000 -n cert-options-939000: exit status 7 (29.42475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-939000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-939000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-939000
--- FAIL: TestCertOptions (12.10s)
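Every start in this group dies the same way: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A standalone Go sketch, independent of minikube's code, that reproduces the symptom by dialing the same unix socket (the path is taken from the log above):

// vmnetdial.go: minimal probe of the socket_vmnet unix socket. With no
// daemon listening, Dial returns a "connection refused" (or "no such
// file or directory") error, which is exactly what surfaces as the
// GUEST_PROVISION failures in these tests.
package main

import (
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}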

TestCertExpiration (197.42s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.083558125s)

-- stdout --
	* [cert-expiration-447000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-447000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.212820291s)

-- stdout --
	* [cert-expiration-447000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-447000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-447000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-447000" primary control-plane node in "cert-expiration-447000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-447000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-447000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-31 12:02:58.413773 -0700 PDT m=+2938.215567835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-447000 -n cert-expiration-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-447000 -n cert-expiration-447000: exit status 7 (41.12375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-447000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-447000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-447000
--- FAIL: TestCertExpiration (197.42s)
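TestCertExpiration starts the cluster with --cert-expiration=3m, waits out the window, and expects the restart with --cert-expiration=8760h to warn about the certs that expired in between (cert_options_test.go:136); here the VM never booted, so no warning could appear. A hedged sketch of how such an expiry check can be expressed with Go's standard library (the certificate path is illustrative, not minikube's code):

// expirycheck.go: hypothetical check of a certificate's validity window,
// the property that --cert-expiration manipulates.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed cert path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate expired at", cert.NotAfter) // a restart should warn here
	} else {
		fmt.Println("certificate valid until", cert.NotAfter)
	}
}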

TestDockerFlags (12.63s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-519000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-519000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.24900475s)

-- stdout --
	* [docker-flags-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-519000" primary control-plane node in "docker-flags-519000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-519000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:59:31.053433    4039 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:59:31.053578    4039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:31.053581    4039 out.go:304] Setting ErrFile to fd 2...
	I0731 11:59:31.053584    4039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:31.053739    4039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:59:31.054791    4039 out.go:298] Setting JSON to false
	I0731 11:59:31.071617    4039 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3540,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:59:31.071687    4039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:59:31.096614    4039 out.go:177] * [docker-flags-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:59:31.105964    4039 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:59:31.105979    4039 notify.go:220] Checking for updates...
	I0731 11:59:31.112994    4039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:59:31.116963    4039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:59:31.119978    4039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:59:31.122949    4039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:59:31.125915    4039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:59:31.129366    4039 config.go:182] Loaded profile config "force-systemd-flag-232000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:31.129430    4039 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:31.129473    4039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:59:31.133966    4039 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:59:31.139949    4039 start.go:297] selected driver: qemu2
	I0731 11:59:31.139954    4039 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:59:31.139961    4039 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:59:31.142201    4039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:59:31.144948    4039 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:59:31.148059    4039 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0731 11:59:31.148098    4039 cni.go:84] Creating CNI manager for ""
	I0731 11:59:31.148106    4039 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:59:31.148110    4039 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:59:31.148153    4039 start.go:340] cluster config:
	{Name:docker-flags-519000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:59:31.151950    4039 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:59:31.155980    4039 out.go:177] * Starting "docker-flags-519000" primary control-plane node in "docker-flags-519000" cluster
	I0731 11:59:31.165012    4039 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:59:31.165044    4039 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:59:31.165062    4039 cache.go:56] Caching tarball of preloaded images
	I0731 11:59:31.165138    4039 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:59:31.165149    4039 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:59:31.165199    4039 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/docker-flags-519000/config.json ...
	I0731 11:59:31.165209    4039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/docker-flags-519000/config.json: {Name:mkc51c2b7c52af4eb986a2f1e9e9e5eaeabc9db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:31.165562    4039 start.go:360] acquireMachinesLock for docker-flags-519000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:33.166145    4039 start.go:364] duration metric: took 2.000603666s to acquireMachinesLock for "docker-flags-519000"
	I0731 11:59:33.166259    4039 start.go:93] Provisioning new machine with config: &{Name:docker-flags-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:33.166528    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:33.172031    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:33.220707    4039 start.go:159] libmachine.API.Create for "docker-flags-519000" (driver="qemu2")
	I0731 11:59:33.220746    4039 client.go:168] LocalClient.Create starting
	I0731 11:59:33.220867    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:33.220932    4039 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:33.220949    4039 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:33.221011    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:33.221057    4039 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:33.221074    4039 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:33.221733    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:33.378326    4039 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:33.507020    4039 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:33.507028    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:33.507279    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:33.516673    4039 main.go:141] libmachine: STDOUT: 
	I0731 11:59:33.516690    4039 main.go:141] libmachine: STDERR: 
	I0731 11:59:33.516729    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2 +20000M
	I0731 11:59:33.524603    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:33.524616    4039 main.go:141] libmachine: STDERR: 
	I0731 11:59:33.524627    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:33.524631    4039 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:33.524655    4039 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:33.524682    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ae:f2:cc:0b:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:33.526386    4039 main.go:141] libmachine: STDOUT: 
	I0731 11:59:33.526403    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:33.526424    4039 client.go:171] duration metric: took 305.675ms to LocalClient.Create
	I0731 11:59:35.528650    4039 start.go:128] duration metric: took 2.36212875s to createHost
	I0731 11:59:35.528758    4039 start.go:83] releasing machines lock for "docker-flags-519000", held for 2.362605167s
	W0731 11:59:35.528826    4039 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:35.549144    4039 out.go:177] * Deleting "docker-flags-519000" in qemu2 ...
	W0731 11:59:35.578838    4039 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:35.578869    4039 start.go:729] Will try again in 5 seconds ...
	I0731 11:59:40.580989    4039 start.go:360] acquireMachinesLock for docker-flags-519000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:40.703467    4039 start.go:364] duration metric: took 122.359542ms to acquireMachinesLock for "docker-flags-519000"
	I0731 11:59:40.703603    4039 start.go:93] Provisioning new machine with config: &{Name:docker-flags-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:40.703816    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:40.713123    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:40.762831    4039 start.go:159] libmachine.API.Create for "docker-flags-519000" (driver="qemu2")
	I0731 11:59:40.762878    4039 client.go:168] LocalClient.Create starting
	I0731 11:59:40.762951    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:40.763002    4039 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:40.763020    4039 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:40.763090    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:40.763120    4039 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:40.763135    4039 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:40.763605    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:41.112101    4039 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:41.205798    4039 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:41.205812    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:41.205985    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:41.215432    4039 main.go:141] libmachine: STDOUT: 
	I0731 11:59:41.215452    4039 main.go:141] libmachine: STDERR: 
	I0731 11:59:41.215517    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2 +20000M
	I0731 11:59:41.224110    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:41.224130    4039 main.go:141] libmachine: STDERR: 
	I0731 11:59:41.224139    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:41.224143    4039 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:41.224150    4039 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:41.224179    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:c5:d9:1b:ce:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/docker-flags-519000/disk.qcow2
	I0731 11:59:41.225928    4039 main.go:141] libmachine: STDOUT: 
	I0731 11:59:41.225940    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:41.225953    4039 client.go:171] duration metric: took 463.079667ms to LocalClient.Create
	I0731 11:59:43.228086    4039 start.go:128] duration metric: took 2.524295667s to createHost
	I0731 11:59:43.228140    4039 start.go:83] releasing machines lock for "docker-flags-519000", held for 2.52470175s
	W0731 11:59:43.228502    4039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-519000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-519000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:43.242044    4039 out.go:177] 
	W0731 11:59:43.246106    4039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:59:43.246149    4039 out.go:239] * 
	* 
	W0731 11:59:43.248350    4039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:59:43.258029    4039 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-519000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-519000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-519000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (92.540834ms)

-- stdout --
	* The control-plane node docker-flags-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-519000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-519000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-519000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-519000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-519000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-519000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-519000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-519000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (93.750417ms)

-- stdout --
	* The control-plane node docker-flags-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-519000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-519000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-519000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-519000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-519000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-31 11:59:43.455235 -0700 PDT m=+2743.286913460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-519000 -n docker-flags-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-519000 -n docker-flags-519000: exit status 7 (33.302125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-519000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-519000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-519000
--- FAIL: TestDockerFlags (12.63s)
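docker_test.go:63 and :73 assert that values passed via --docker-env and --docker-opt surface in dockerd's systemd unit, read back with `systemctl show docker`. A minimal sketch of that containment check, with a hard-coded sample string standing in for the real `minikube ssh` output (the literal is a made-up example, not captured output):

// dockerenv.go: hypothetical version of the substring checks in
// docker_test.go; output imitates what `systemctl show docker
// --property=Environment --no-pager` might print on a healthy node.
package main

import (
	"fmt"
	"strings"
)

func main() {
	output := "Environment=FOO=BAR BAZ=BAT" // assumed sample output
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if strings.Contains(output, kv) {
			fmt.Printf("found %s in docker's environment\n", kv)
		} else {
			fmt.Printf("missing %s in docker's environment\n", kv)
		}
	}
}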

TestForceSystemdFlag (12.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.906588625s)

-- stdout --
	* [force-systemd-flag-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-232000" primary control-plane node in "force-systemd-flag-232000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-232000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:59:28.862458    4025 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:59:28.862599    4025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:28.862602    4025 out.go:304] Setting ErrFile to fd 2...
	I0731 11:59:28.862604    4025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:28.862746    4025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:59:28.863824    4025 out.go:298] Setting JSON to false
	I0731 11:59:28.879649    4025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3537,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:59:28.879712    4025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:59:28.885617    4025 out.go:177] * [force-systemd-flag-232000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:59:28.892758    4025 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:59:28.892815    4025 notify.go:220] Checking for updates...
	I0731 11:59:28.900688    4025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:59:28.904701    4025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:59:28.907797    4025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:59:28.910717    4025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:59:28.913734    4025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:59:28.917003    4025 config.go:182] Loaded profile config "force-systemd-env-715000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:28.917071    4025 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:28.917117    4025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:59:28.920696    4025 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:59:28.927738    4025 start.go:297] selected driver: qemu2
	I0731 11:59:28.927743    4025 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:59:28.927749    4025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:59:28.930046    4025 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:59:28.931327    4025 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:59:28.933830    4025 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:59:28.933867    4025 cni.go:84] Creating CNI manager for ""
	I0731 11:59:28.933875    4025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:59:28.933881    4025 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:59:28.933909    4025 start.go:340] cluster config:
	{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:59:28.937501    4025 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:59:28.944718    4025 out.go:177] * Starting "force-systemd-flag-232000" primary control-plane node in "force-systemd-flag-232000" cluster
	I0731 11:59:28.948700    4025 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:59:28.948714    4025 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:59:28.948723    4025 cache.go:56] Caching tarball of preloaded images
	I0731 11:59:28.948795    4025 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:59:28.948801    4025 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:59:28.948859    4025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/force-systemd-flag-232000/config.json ...
	I0731 11:59:28.948870    4025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/force-systemd-flag-232000/config.json: {Name:mk1452e26c2d9b17d983c06e96aeb594a9bd4fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:28.949234    4025 start.go:360] acquireMachinesLock for force-systemd-flag-232000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:30.664303    4025 start.go:364] duration metric: took 1.715042458s to acquireMachinesLock for "force-systemd-flag-232000"
	I0731 11:59:30.664462    4025 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:30.664653    4025 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:30.674043    4025 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:30.722884    4025 start.go:159] libmachine.API.Create for "force-systemd-flag-232000" (driver="qemu2")
	I0731 11:59:30.722948    4025 client.go:168] LocalClient.Create starting
	I0731 11:59:30.723065    4025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:30.723134    4025 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:30.723149    4025 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:30.723211    4025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:30.723256    4025 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:30.723271    4025 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:30.723890    4025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:31.049766    4025 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:31.137637    4025 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:31.137644    4025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:31.137832    4025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:31.149750    4025 main.go:141] libmachine: STDOUT: 
	I0731 11:59:31.149774    4025 main.go:141] libmachine: STDERR: 
	I0731 11:59:31.149831    4025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2 +20000M
	I0731 11:59:31.161973    4025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:31.161996    4025 main.go:141] libmachine: STDERR: 
	I0731 11:59:31.162019    4025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:31.162027    4025 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:31.162042    4025 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:31.162068    4025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:85:0d:9e:e4:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:31.163730    4025 main.go:141] libmachine: STDOUT: 
	I0731 11:59:31.163750    4025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:31.163771    4025 client.go:171] duration metric: took 440.824958ms to LocalClient.Create
	I0731 11:59:33.165901    4025 start.go:128] duration metric: took 2.501276916s to createHost
	I0731 11:59:33.166013    4025 start.go:83] releasing machines lock for "force-systemd-flag-232000", held for 2.501657583s
	W0731 11:59:33.166082    4025 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:33.179900    4025 out.go:177] * Deleting "force-systemd-flag-232000" in qemu2 ...
	W0731 11:59:33.200575    4025 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:33.200599    4025 start.go:729] Will try again in 5 seconds ...
	I0731 11:59:38.202802    4025 start.go:360] acquireMachinesLock for force-systemd-flag-232000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:38.203285    4025 start.go:364] duration metric: took 369.084µs to acquireMachinesLock for "force-systemd-flag-232000"
	I0731 11:59:38.203434    4025 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-232000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-232000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:38.203689    4025 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:38.223143    4025 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:38.275006    4025 start.go:159] libmachine.API.Create for "force-systemd-flag-232000" (driver="qemu2")
	I0731 11:59:38.275061    4025 client.go:168] LocalClient.Create starting
	I0731 11:59:38.275172    4025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:38.275235    4025 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:38.275250    4025 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:38.275321    4025 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:38.275365    4025 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:38.275376    4025 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:38.275906    4025 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:38.432695    4025 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:38.681477    4025 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:38.681484    4025 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:38.681758    4025 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:38.691380    4025 main.go:141] libmachine: STDOUT: 
	I0731 11:59:38.691412    4025 main.go:141] libmachine: STDERR: 
	I0731 11:59:38.691454    4025 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2 +20000M
	I0731 11:59:38.699393    4025 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:38.699413    4025 main.go:141] libmachine: STDERR: 
	I0731 11:59:38.699424    4025 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:38.699430    4025 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:38.699441    4025 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:38.699472    4025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:43:ca:80:79:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-flag-232000/disk.qcow2
	I0731 11:59:38.701129    4025 main.go:141] libmachine: STDOUT: 
	I0731 11:59:38.701145    4025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:38.701157    4025 client.go:171] duration metric: took 426.100625ms to LocalClient.Create
	I0731 11:59:40.703275    4025 start.go:128] duration metric: took 2.499584417s to createHost
	I0731 11:59:40.703335    4025 start.go:83] releasing machines lock for "force-systemd-flag-232000", held for 2.50007775s
	W0731 11:59:40.703589    4025 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-232000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:40.720213    4025 out.go:177] 
	W0731 11:59:40.724237    4025 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:59:40.724260    4025 out.go:239] * 
	* 
	W0731 11:59:40.726272    4025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:59:40.735142    4025 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-232000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (93.333542ms)

-- stdout --
	* The control-plane node force-systemd-flag-232000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-232000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-232000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-31 11:59:40.838506 -0700 PDT m=+2740.670127376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-232000 -n force-systemd-flag-232000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-232000 -n force-systemd-flag-232000: exit status 7 (40.827875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-232000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-232000
--- FAIL: TestForceSystemdFlag (12.25s)
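Every VM create in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube exits with GUEST_PROVISION after a single retry. That implicates the host's socket_vmnet daemon rather than minikube itself, and the same error recurs in TestForceSystemdEnv and the other qemu2 start failures below. A minimal triage sketch (hypothetical helper, not part of the test suite) that performs the same unix-socket dial the client does:

	// socketcheck.go: dial the socket_vmnet unix socket the way
	// socket_vmnet_client would, to separate "daemon not running"
	// (connection refused) from a missing socket file or a permissions
	// problem. The path below is the one shown in the logs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // refused => nothing is listening
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial is refused the way it was here, restarting the socket_vmnet daemon on the CI agent (via launchd in a default install) should unblock every test in this run that fails with this error.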

TestForceSystemdEnv (10.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-715000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-715000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.01090475s)

-- stdout --
	* [force-systemd-env-715000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-715000" primary control-plane node in "force-systemd-env-715000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-715000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:59:20.720523    3988 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:59:20.720684    3988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:20.720687    3988 out.go:304] Setting ErrFile to fd 2...
	I0731 11:59:20.720689    3988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:59:20.720830    3988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:59:20.721899    3988 out.go:298] Setting JSON to false
	I0731 11:59:20.738322    3988 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3529,"bootTime":1722448831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:59:20.738400    3988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:59:20.744795    3988 out.go:177] * [force-systemd-env-715000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:59:20.752683    3988 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:59:20.752741    3988 notify.go:220] Checking for updates...
	I0731 11:59:20.760774    3988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:59:20.762201    3988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:59:20.765772    3988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:59:20.768849    3988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:59:20.770186    3988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0731 11:59:20.773179    3988 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:59:20.773236    3988 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:59:20.777781    3988 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:59:20.782771    3988 start.go:297] selected driver: qemu2
	I0731 11:59:20.782777    3988 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:59:20.782784    3988 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:59:20.785392    3988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:59:20.789745    3988 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:59:20.791287    3988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:59:20.791332    3988 cni.go:84] Creating CNI manager for ""
	I0731 11:59:20.791342    3988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:59:20.791346    3988 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:59:20.791388    3988 start.go:340] cluster config:
	{Name:force-systemd-env-715000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:59:20.795273    3988 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:59:20.803803    3988 out.go:177] * Starting "force-systemd-env-715000" primary control-plane node in "force-systemd-env-715000" cluster
	I0731 11:59:20.807734    3988 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:59:20.807751    3988 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:59:20.807759    3988 cache.go:56] Caching tarball of preloaded images
	I0731 11:59:20.807826    3988 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:59:20.807832    3988 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:59:20.807887    3988 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/force-systemd-env-715000/config.json ...
	I0731 11:59:20.807899    3988 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/force-systemd-env-715000/config.json: {Name:mk9174b3f1fe6947c809f41cfdc24dfc267b3cd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:59:20.808244    3988 start.go:360] acquireMachinesLock for force-systemd-env-715000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:20.808279    3988 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "force-systemd-env-715000"
	I0731 11:59:20.808294    3988 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:20.808323    3988 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:20.812812    3988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:20.830116    3988 start.go:159] libmachine.API.Create for "force-systemd-env-715000" (driver="qemu2")
	I0731 11:59:20.830146    3988 client.go:168] LocalClient.Create starting
	I0731 11:59:20.830213    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:20.830243    3988 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:20.830252    3988 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:20.830286    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:20.830308    3988 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:20.830316    3988 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:20.830766    3988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:20.976842    3988 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:21.138760    3988 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:21.138770    3988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:21.139014    3988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:21.148379    3988 main.go:141] libmachine: STDOUT: 
	I0731 11:59:21.148397    3988 main.go:141] libmachine: STDERR: 
	I0731 11:59:21.148444    3988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2 +20000M
	I0731 11:59:21.156343    3988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:21.156356    3988 main.go:141] libmachine: STDERR: 
	I0731 11:59:21.156368    3988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:21.156372    3988 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:21.156402    3988 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:21.156426    3988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:e2:f7:82:9c:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:21.158059    3988 main.go:141] libmachine: STDOUT: 
	I0731 11:59:21.158077    3988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:21.158096    3988 client.go:171] duration metric: took 327.949375ms to LocalClient.Create
	I0731 11:59:23.160325    3988 start.go:128] duration metric: took 2.352014583s to createHost
	I0731 11:59:23.160420    3988 start.go:83] releasing machines lock for "force-systemd-env-715000", held for 2.352181459s
	W0731 11:59:23.160476    3988 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:23.174542    3988 out.go:177] * Deleting "force-systemd-env-715000" in qemu2 ...
	W0731 11:59:23.202414    3988 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:23.202439    3988 start.go:729] Will try again in 5 seconds ...
	I0731 11:59:28.202705    3988 start.go:360] acquireMachinesLock for force-systemd-env-715000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:59:28.203143    3988 start.go:364] duration metric: took 354.541µs to acquireMachinesLock for "force-systemd-env-715000"
	I0731 11:59:28.203283    3988 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-715000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:59:28.203551    3988 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:59:28.213119    3988 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 11:59:28.264205    3988 start.go:159] libmachine.API.Create for "force-systemd-env-715000" (driver="qemu2")
	I0731 11:59:28.264250    3988 client.go:168] LocalClient.Create starting
	I0731 11:59:28.264371    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:59:28.264442    3988 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:28.264460    3988 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:28.264526    3988 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:59:28.264573    3988 main.go:141] libmachine: Decoding PEM data...
	I0731 11:59:28.264588    3988 main.go:141] libmachine: Parsing certificate...
	I0731 11:59:28.265080    3988 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:59:28.423241    3988 main.go:141] libmachine: Creating SSH key...
	I0731 11:59:28.642019    3988 main.go:141] libmachine: Creating Disk image...
	I0731 11:59:28.642028    3988 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:59:28.642252    3988 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:28.651751    3988 main.go:141] libmachine: STDOUT: 
	I0731 11:59:28.651770    3988 main.go:141] libmachine: STDERR: 
	I0731 11:59:28.651824    3988 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2 +20000M
	I0731 11:59:28.660029    3988 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:59:28.660043    3988 main.go:141] libmachine: STDERR: 
	I0731 11:59:28.660057    3988 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:28.660062    3988 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:59:28.660075    3988 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:59:28.660110    3988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:27:e2:39:51:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/force-systemd-env-715000/disk.qcow2
	I0731 11:59:28.661819    3988 main.go:141] libmachine: STDOUT: 
	I0731 11:59:28.661835    3988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:59:28.661847    3988 client.go:171] duration metric: took 397.600333ms to LocalClient.Create
	I0731 11:59:30.664066    3988 start.go:128] duration metric: took 2.460534667s to createHost
	I0731 11:59:30.664157    3988 start.go:83] releasing machines lock for "force-systemd-env-715000", held for 2.461039833s
	W0731 11:59:30.664461    3988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-715000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-715000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:59:30.677938    3988 out.go:177] 
	W0731 11:59:30.681091    3988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:59:30.681123    3988 out.go:239] * 
	* 
	W0731 11:59:30.683422    3988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:59:30.692944    3988 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-715000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-715000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-715000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (87.634292ms)

-- stdout --
	* The control-plane node force-systemd-env-715000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-715000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-715000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-31 11:59:30.791394 -0700 PDT m=+2730.622797501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-715000 -n force-systemd-env-715000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-715000 -n force-systemd-env-715000: exit status 7 (36.143166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-715000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-715000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-715000
--- FAIL: TestForceSystemdEnv (10.33s)

TestFunctional/parallel/ServiceCmdConnect (34.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-080000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-080000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-prx58" [90a7d060-8b65-497a-9679-75e6b5169d8c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-prx58" [90a7d060-8b65-497a-9679-75e6b5169d8c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.067089542s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31128
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31128: Get "http://192.168.105.4:31128": dial tcp 192.168.105.4:31128: connect: connection refused
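The eight identical failures above are the test's poll loop running out: the Service exposes NodePort 31128, but its only backing pod is crash-looping, so there are no ready endpoints and every TCP connect is refused. A rough sketch of that kind of poll (timings assumed; the real loop lives in functional_test.go):

	// Poll the NodePort URL until it answers or attempts are exhausted.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://192.168.105.4:31128" // endpoint reported by the test above
		for attempt := 1; attempt <= 8; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Println("service reachable:", resp.Status)
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(3 * time.Second) // assumed backoff
		}
		fmt.Println("giving up: no ready endpoints behind the NodePort")
	}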
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-080000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-prx58
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-080000/192.168.105.4
Start Time:       Wed, 31 Jul 2024 11:24:01 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://76109825a6e6d3c5721e132d9384636476e261a350494c6bda2d5953a95d259f
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 31 Jul 2024 11:24:15 -0700
      Finished:     Wed, 31 Jul 2024 11:24:15 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wq9p8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-wq9p8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  33s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-prx58 to functional-080000
  Normal   Pulled     20s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    20s (x3 over 34s)  kubelet            Created container echoserver-arm
  Normal   Started    20s (x3 over 33s)  kubelet            Started container echoserver-arm
  Warning  BackOff    4s (x4 over 32s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-prx58_default(90a7d060-8b65-497a-9679-75e6b5169d8c)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-080000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
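"exec format error" is the kernel refusing to run a binary built for a different CPU architecture: the entrypoint of echoserver-arm:1.8 is apparently not an arm64 binary, so the container can never start on this arm64 node, which is why the pod crash-loops and the Service describe below shows no Endpoints. One way to confirm, once the suspect binary has been copied out of the image (e.g. with docker cp from a created container), is to read its ELF header (hypothetical checker):

	// elfcheck.go: print the ELF machine type of a binary. On an arm64
	// node only EM_AARCH64 binaries run natively; EM_X86_64 here would
	// fully explain the "exec format error" above.
	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Println("usage: elfcheck <path-to-binary>")
			return
		}
		f, err := elf.Open(os.Args[1])
		if err != nil {
			fmt.Println("not a readable ELF binary:", err)
			return
		}
		defer f.Close()
		fmt.Println("ELF machine:", f.Machine)
	}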
functional_test.go:1610: (dbg) Run:  kubectl --context functional-080000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.218.175
IPs:                      10.96.218.175
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31128/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-080000 -n functional-080000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh -- ls                                                                                          | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh cat                                                                                            | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | /mount-9p/test-1722450264749385000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh stat                                                                                           | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh stat                                                                                           | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh sudo                                                                                           | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3719411143/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh -- ls                                                                                          | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh sudo                                                                                           | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-080000 ssh findmnt                                                                                        | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT | 31 Jul 24 11:24 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-080000 --dry-run                                                                                       | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-080000                                                                                                 | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-080000 | jenkins | v1.33.1 | 31 Jul 24 11:24 PDT |                     |
	|           | -p functional-080000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
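	
	The mount entries above exercise minikube's 9p host-directory mount: a host temp directory is exported into the guest at /mount-9p (or /mount1..3), verified over ssh with findmnt, and torn down with umount -f or `minikube mount --kill`. A minimal sketch of that flow against this profile (the host directory path is a placeholder, and `minikube mount` blocks, so it is backgrounded here):
	
	  minikube -p functional-080000 mount /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
	  minikube -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
	  minikube -p functional-080000 ssh "sudo umount -f /mount-9p"
	  minikube -p functional-080000 mount --kill=true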
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 11:24:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:24:30.999091    2575 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:24:30.999198    2575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.999206    2575 out.go:304] Setting ErrFile to fd 2...
	I0731 11:24:30.999208    2575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.999338    2575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:24:31.000715    2575 out.go:298] Setting JSON to false
	I0731 11:24:31.017985    2575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1440,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:24:31.018091    2575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:24:31.022180    2575 out.go:177] * [functional-080000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:24:31.030151    2575 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:24:31.030235    2575 notify.go:220] Checking for updates...
	I0731 11:24:31.038152    2575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:24:31.041142    2575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:24:31.042502    2575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:24:31.045171    2575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:24:31.048161    2575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:24:31.051473    2575 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:24:31.051719    2575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:24:31.056153    2575 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 11:24:31.063154    2575 start.go:297] selected driver: qemu2
	I0731 11:24:31.063162    2575 start.go:901] validating driver "qemu2" against &{Name:functional-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:24:31.063210    2575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:24:31.069098    2575 out.go:177] 
	W0731 11:24:31.073208    2575 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 11:24:31.077081    2575 out.go:177] 
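	
	The "Last Start" log above follows the klog format stated in its header ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg): severity is the first character and the source location is the fourth whitespace-separated field. Assuming the log has been saved to a file (minikube.log is a placeholder name), warnings and errors can be pulled out with standard tools:
	
	  grep -E '^[[:space:]]*[WE][0-9]{4}' minikube.log             # warning/error lines only
	  awk '$1 ~ /^[WE][0-9]{4}$/ {print $1, $4}' minikube.log      # severity+date and file:line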
	
	
	==> Docker <==
	Jul 31 18:24:26 functional-080000 dockerd[5896]: time="2024-07-31T18:24:26.961865956Z" level=info msg="ignoring event" container=63f75def3fcf1e1d4bc6d785361a47abb989c547cba95250a228214696e047d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 18:24:26 functional-080000 dockerd[5902]: time="2024-07-31T18:24:26.961955752Z" level=info msg="shim disconnected" id=63f75def3fcf1e1d4bc6d785361a47abb989c547cba95250a228214696e047d3 namespace=moby
	Jul 31 18:24:26 functional-080000 dockerd[5902]: time="2024-07-31T18:24:26.961986421Z" level=warning msg="cleaning up after shim disconnected" id=63f75def3fcf1e1d4bc6d785361a47abb989c547cba95250a228214696e047d3 namespace=moby
	Jul 31 18:24:26 functional-080000 dockerd[5902]: time="2024-07-31T18:24:26.961990504Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 18:24:28 functional-080000 dockerd[5896]: time="2024-07-31T18:24:28.149735159Z" level=info msg="ignoring event" container=5aa01e0a0fd58522df26532c4dce5b77dc5693be443796e5050cc50c9a376d6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 31 18:24:28 functional-080000 dockerd[5902]: time="2024-07-31T18:24:28.149841790Z" level=info msg="shim disconnected" id=5aa01e0a0fd58522df26532c4dce5b77dc5693be443796e5050cc50c9a376d6f namespace=moby
	Jul 31 18:24:28 functional-080000 dockerd[5902]: time="2024-07-31T18:24:28.149867583Z" level=warning msg="cleaning up after shim disconnected" id=5aa01e0a0fd58522df26532c4dce5b77dc5693be443796e5050cc50c9a376d6f namespace=moby
	Jul 31 18:24:28 functional-080000 dockerd[5902]: time="2024-07-31T18:24:28.149871875Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.927597275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.927656403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.927858247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.927918166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.934795216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.934940557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.934968475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:31 functional-080000 dockerd[5902]: time="2024-07-31T18:24:31.935044729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:32 functional-080000 cri-dockerd[6164]: time="2024-07-31T18:24:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fa9803bfe6ca62ef83ee66f99fb8ae39ab28c3fbb8ead8908da741389c4bef7/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 18:24:32 functional-080000 cri-dockerd[6164]: time="2024-07-31T18:24:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/61db2f1d5e694b31003d82e781ab0ef874e39a54f767fbf533e111c7efe7f334/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 31 18:24:32 functional-080000 dockerd[5896]: time="2024-07-31T18:24:32.249137968Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 31 18:24:34 functional-080000 cri-dockerd[6164]: time="2024-07-31T18:24:34Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Jul 31 18:24:34 functional-080000 dockerd[5902]: time="2024-07-31T18:24:34.071636894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 18:24:34 functional-080000 dockerd[5902]: time="2024-07-31T18:24:34.071818111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 18:24:34 functional-080000 dockerd[5902]: time="2024-07-31T18:24:34.071830070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:34 functional-080000 dockerd[5902]: time="2024-07-31T18:24:34.071917616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 18:24:34 functional-080000 dockerd[5896]: time="2024-07-31T18:24:34.234396965Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	1e13c54f5b6f5       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   1 second ago         Running             dashboard-metrics-scraper   0                   5fa9803bfe6ca       dashboard-metrics-scraper-b5fc48f67-psmwb
	63f75def3fcf1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    9 seconds ago        Exited              mount-munger                0                   5aa01e0a0fd58       busybox-mount
	c31a95f96674a       nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c                          17 seconds ago       Running             myfrontend                  0                   b7e5a7b57cca5       sp-pod
	76109825a6e6d       72565bf5bbedf                                                                                          20 seconds ago       Exited              echoserver-arm              2                   23d81f905d51b       hello-node-connect-6f49f58cd5-prx58
	3a41c0d974f83       72565bf5bbedf                                                                                          26 seconds ago       Exited              echoserver-arm              2                   7aa73b21ea09e       hello-node-65f5d5cc78-mgcts
	da34604998e8e       nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                          40 seconds ago       Running             nginx                       0                   510e0dc12b250       nginx-svc
	8ae721ef67b5f       2437cf7621777                                                                                          About a minute ago   Running             coredns                     2                   4da8b66446365       coredns-7db6d8ff4d-fh4k7
	b1d52ed0c1611       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         3                   da604f2db7224       storage-provisioner
	88e75753c43fd       2351f570ed0ea                                                                                          About a minute ago   Running             kube-proxy                  2                   40f88ca64625f       kube-proxy-4757g
	de71866d95c7b       014faa467e297                                                                                          About a minute ago   Running             etcd                        2                   0cce77e0e01a1       etcd-functional-080000
	ebb74fea97213       61773190d42ff                                                                                          About a minute ago   Running             kube-apiserver              0                   be058266516f4       kube-apiserver-functional-080000
	0dcae381c3e9c       8e97cdb19e7cc                                                                                          About a minute ago   Running             kube-controller-manager     2                   a47077da7333c       kube-controller-manager-functional-080000
	a6e4ffefdb447       d48f992a22722                                                                                          About a minute ago   Running             kube-scheduler              2                   24f4c312614a2       kube-scheduler-functional-080000
	bcf1a63064f70       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         2                   5a4771610feb0       storage-provisioner
	0a8b5805d2aba       2437cf7621777                                                                                          2 minutes ago        Exited              coredns                     1                   e1badb7d12bdf       coredns-7db6d8ff4d-fh4k7
	bf9e6e7cf59b3       2351f570ed0ea                                                                                          2 minutes ago        Exited              kube-proxy                  1                   e17a6ef1988c2       kube-proxy-4757g
	644b9f76a3e41       d48f992a22722                                                                                          2 minutes ago        Exited              kube-scheduler              1                   7ea52b4e9ebaa       kube-scheduler-functional-080000
	a1003bcd721ef       014faa467e297                                                                                          2 minutes ago        Exited              etcd                        1                   be4dd54fae781       etcd-functional-080000
	a935c3b449042       8e97cdb19e7cc                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   2b1c5c0830812       kube-controller-manager-functional-080000
	
	
	==> coredns [0a8b5805d2ab] <==
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44227 - 36903 "HINFO IN 9194187066028455926.1304334601339388307. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024043887s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1118165501]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:22:14.708) (total time: 30001ms):
	Trace[1118165501]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:22:44.709)
	Trace[1118165501]: [30.001151406s] [30.001151406s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[801465979]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:22:14.711) (total time: 30000ms):
	Trace[801465979]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:22:44.711)
	Trace[801465979]: [30.000181847s] [30.000181847s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[786485040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:22:14.711) (total time: 30000ms):
	Trace[786485040]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:22:44.711)
	Trace[786485040]: [30.000192884s] [30.000192884s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
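	
	The dial failures above are CoreDNS timing out against 10.96.0.1:443, the in-cluster ClusterIP of the API server, while the control plane was restarting; the replacement pod (8ae721ef67b5, next section) then resolved queries normally. If this state had persisted, a reasonable first check (assuming kubectl is pointed at this profile's kubeconfig) would be:
	
	  kubectl get svc kubernetes -n default                        # the 10.96.0.1 ClusterIP being dialed
	  kubectl -n kube-system logs -l k8s-app=kube-dns
	  minikube -p functional-080000 ssh "sudo iptables-save | grep 10.96.0.1"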
	
	
	==> coredns [8ae721ef67b5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42492 - 2098 "HINFO IN 8295686486500827501.9128594608877723197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009811959s
	[INFO] 10.244.0.1:1300 - 13083 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.00009688s
	[INFO] 10.244.0.1:26736 - 10401 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000085838s
	[INFO] 10.244.0.1:52380 - 37079 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000075796s
	[INFO] 10.244.0.1:24467 - 6776 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001062099s
	[INFO] 10.244.0.1:18955 - 11150 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000055879s
	[INFO] 10.244.0.1:14166 - 32516 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000086421s
	
	
	==> describe nodes <==
	Name:               functional-080000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-080000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=functional-080000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T11_21_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:21:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-080000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:24:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:24:21 +0000   Wed, 31 Jul 2024 18:21:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:24:21 +0000   Wed, 31 Jul 2024 18:21:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:24:21 +0000   Wed, 31 Jul 2024 18:21:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:24:21 +0000   Wed, 31 Jul 2024 18:21:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-080000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c68c9295f604817b83089dc3a1ac9da
	  System UUID:                6c68c9295f604817b83089dc3a1ac9da
	  Boot ID:                    a27a6652-e712-4755-8534-daad0b84e6e0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-65f5d5cc78-mgcts                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     hello-node-connect-6f49f58cd5-prx58          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 coredns-7db6d8ff4d-fh4k7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m43s
	  kube-system                 etcd-functional-080000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m57s
	  kube-system                 kube-apiserver-functional-080000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-080000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-4757g                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-scheduler-functional-080000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-psmwb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-fq82x        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m42s                  kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m57s                  kubelet          Node functional-080000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m57s                  kubelet          Node functional-080000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s                  kubelet          Node functional-080000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m53s                  kubelet          Node functional-080000 status is now: NodeReady
	  Normal  RegisteredNode           2m44s                  node-controller  Node functional-080000 event: Registered Node functional-080000 in Controller
	  Normal  NodeHasNoDiskPressure    2m24s (x8 over 2m24s)  kubelet          Node functional-080000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m24s (x8 over 2m24s)  kubelet          Node functional-080000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m24s (x7 over 2m24s)  kubelet          Node functional-080000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node functional-080000 event: Registered Node functional-080000 in Controller
	  Normal  Starting                 78s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-080000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-080000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-080000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           63s                    node-controller  Node functional-080000 event: Registered Node functional-080000 in Controller
	
	
	==> dmesg <==
	[ +11.747089] kauditd_printk_skb: 32 callbacks suppressed
	[ +26.199453] systemd-fstab-generator[5000]: Ignoring "noauto" option for root device
	[Jul31 18:23] systemd-fstab-generator[5422]: Ignoring "noauto" option for root device
	[  +0.055137] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.115069] systemd-fstab-generator[5455]: Ignoring "noauto" option for root device
	[  +0.103651] systemd-fstab-generator[5467]: Ignoring "noauto" option for root device
	[  +0.124357] systemd-fstab-generator[5481]: Ignoring "noauto" option for root device
	[  +5.115241] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.289491] systemd-fstab-generator[6117]: Ignoring "noauto" option for root device
	[  +0.092391] systemd-fstab-generator[6129]: Ignoring "noauto" option for root device
	[  +0.087507] systemd-fstab-generator[6141]: Ignoring "noauto" option for root device
	[  +0.105040] systemd-fstab-generator[6156]: Ignoring "noauto" option for root device
	[  +0.215382] systemd-fstab-generator[6320]: Ignoring "noauto" option for root device
	[  +0.919974] systemd-fstab-generator[6445]: Ignoring "noauto" option for root device
	[  +3.477363] kauditd_printk_skb: 200 callbacks suppressed
	[ +11.780557] kauditd_printk_skb: 29 callbacks suppressed
	[  +4.199570] systemd-fstab-generator[7488]: Ignoring "noauto" option for root device
	[  +4.966131] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.249814] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.238127] kauditd_printk_skb: 22 callbacks suppressed
	[Jul31 18:24] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.266214] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.804244] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.090625] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.897002] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [a1003bcd721e] <==
	{"level":"info","ts":"2024-07-31T18:22:12.949683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T18:22:12.949695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-31T18:22:12.949708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T18:22:12.949711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T18:22:12.949718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T18:22:12.949724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T18:22:12.951631Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-080000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:22:12.951651Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:22:12.951823Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:22:12.952729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:22:12.953473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T18:22:12.957237Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:22:12.957275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:23:03.695712Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T18:23:03.695763Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-080000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-31T18:23:03.695809Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T18:23:03.695852Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/07/31 18:23:03 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 18:23:03 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T18:23:03.705482Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T18:23:03.705509Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T18:23:03.705532Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-31T18:23:03.706634Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T18:23:03.706671Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T18:23:03.706675Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-080000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [de71866d95c7] <==
	{"level":"info","ts":"2024-07-31T18:23:18.540035Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T18:23:18.540151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-31T18:23:18.540189Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-31T18:23:18.540249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:23:18.540349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T18:23:18.543937Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T18:23:18.544012Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T18:23:18.544025Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T18:23:18.544076Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T18:23:18.544086Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-31T18:23:19.831832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-31T18:23:19.831982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T18:23:19.832056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-31T18:23:19.832096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T18:23:19.832118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T18:23:19.832146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-31T18:23:19.832164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-31T18:23:19.839413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:23:19.839816Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T18:23:19.839883Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T18:23:19.839409Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-080000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T18:23:19.839715Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T18:23:19.843469Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T18:23:19.843512Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-31T18:23:51.623266Z","caller":"traceutil/trace.go:171","msg":"trace[2046066069] transaction","detail":"{read_only:false; response_revision:680; number_of_response:1; }","duration":"138.54209ms","start":"2024-07-31T18:23:51.484714Z","end":"2024-07-31T18:23:51.623256Z","steps":["trace[2046066069] 'process raft request'  (duration: 134.273394ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:24:35 up 3 min,  0 users,  load average: 0.74, 0.49, 0.21
	Linux functional-080000 5.10.207 #1 SMP PREEMPT Mon Jul 29 12:07:32 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebb74fea9721] <==
	I0731 18:23:20.466237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 18:23:20.466240       1 cache.go:39] Caches are synced for autoregister controller
	I0731 18:23:20.466668       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 18:23:20.466949       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 18:23:20.466983       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 18:23:20.467026       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 18:23:20.467164       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 18:23:20.469556       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 18:23:20.486472       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 18:23:21.366494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 18:23:21.625959       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 18:23:21.630857       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 18:23:21.641705       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 18:23:21.649247       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 18:23:21.651221       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 18:23:32.699380       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 18:23:32.702379       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 18:23:41.865454       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.10.29"}
	I0731 18:23:47.072317       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 18:23:47.116462       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.63.121"}
	I0731 18:23:51.150999       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.212.43"}
	I0731 18:24:01.547304       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.218.175"}
	I0731 18:24:31.529379       1 controller.go:615] quota admission added evaluator for: namespaces
	I0731 18:24:31.596435       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.198.166"}
	I0731 18:24:31.617486       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.117.42"}
	
	
	==> kube-controller-manager [0dcae381c3e9] <==
	I0731 18:24:15.995767       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.918µs"
	I0731 18:24:21.571878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="39.086µs"
	I0731 18:24:31.554922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.564217ms"
	E0731 18:24:31.554941       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.558006       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="3.05111ms"
	E0731 18:24:31.558028       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.561339       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="8.956778ms"
	E0731 18:24:31.561553       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.564813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="3.163866ms"
	E0731 18:24:31.564831       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.564868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="4.842616ms"
	E0731 18:24:31.564878       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.572409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="4.918577ms"
	E0731 18:24:31.572486       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0731 18:24:31.575099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="18.376µs"
	I0731 18:24:31.593669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="5.560151ms"
	I0731 18:24:31.602144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="8.449962ms"
	I0731 18:24:31.602171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.167µs"
	I0731 18:24:31.609536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="11.668µs"
	I0731 18:24:31.609789       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="17.261983ms"
	I0731 18:24:31.615754       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="5.801789ms"
	I0731 18:24:31.616076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="302.556µs"
	I0731 18:24:31.616161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="68.879µs"
	I0731 18:24:34.137372       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="2.878727ms"
	I0731 18:24:34.138180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="9.833µs"
	
	
	==> kube-controller-manager [a935c3b44904] <==
	I0731 18:22:26.588197       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 18:22:26.588228       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 18:22:26.588266       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 18:22:26.588278       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 18:22:26.588818       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 18:22:26.589907       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 18:22:26.597301       1 shared_informer.go:320] Caches are synced for HPA
	I0731 18:22:26.597346       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 18:22:26.622176       1 shared_informer.go:320] Caches are synced for deployment
	I0731 18:22:26.622232       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 18:22:26.622248       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 18:22:26.622256       1 shared_informer.go:320] Caches are synced for disruption
	I0731 18:22:26.622263       1 shared_informer.go:320] Caches are synced for expand
	I0731 18:22:26.622475       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 18:22:26.622477       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 18:22:26.824921       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 18:22:26.826623       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 18:22:26.871449       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 18:22:26.874176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="289.121782ms"
	I0731 18:22:26.874365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.968µs"
	I0731 18:22:27.236691       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 18:22:27.292946       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 18:22:27.292958       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 18:22:51.551386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="2.876444ms"
	I0731 18:22:51.551986       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="15.003µs"
	
	
	==> kube-proxy [88e75753c43f] <==
	I0731 18:23:21.075454       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:23:21.083490       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 18:23:21.092477       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:23:21.092494       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:23:21.092501       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:23:21.093098       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:23:21.093191       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:23:21.093202       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:23:21.093585       1 config.go:192] "Starting service config controller"
	I0731 18:23:21.093595       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:23:21.093634       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:23:21.093640       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:23:21.093879       1 config.go:319] "Starting node config controller"
	I0731 18:23:21.093913       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:23:21.194354       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:23:21.194371       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:23:21.194382       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bf9e6e7cf59b] <==
	I0731 18:22:14.701364       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:22:14.707085       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0731 18:22:14.719642       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:22:14.719663       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:22:14.719672       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:22:14.720282       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:22:14.720355       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:22:14.720364       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:22:14.720727       1 config.go:192] "Starting service config controller"
	I0731 18:22:14.720737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:22:14.720752       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:22:14.720754       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:22:14.720988       1 config.go:319] "Starting node config controller"
	I0731 18:22:14.720999       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:22:14.821057       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:22:14.821080       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:22:14.821096       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [644b9f76a3e4] <==
	I0731 18:22:12.337883       1 serving.go:380] Generated self-signed cert in-memory
	W0731 18:22:13.459582       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 18:22:13.459602       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:22:13.459607       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 18:22:13.459610       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 18:22:13.487411       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 18:22:13.487513       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:22:13.488368       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 18:22:13.488428       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 18:22:13.488469       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 18:22:13.488489       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 18:22:13.588692       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 18:23:03.709352       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a6e4ffefdb44] <==
	I0731 18:23:18.562842       1 serving.go:380] Generated self-signed cert in-memory
	W0731 18:23:20.386173       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 18:23:20.386286       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:23:20.386311       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 18:23:20.386333       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 18:23:20.415233       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 18:23:20.415268       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:23:20.417539       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 18:23:20.418373       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 18:23:20.418384       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 18:23:20.418391       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 18:23:20.519121       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:24:17 functional-080000 kubelet[6452]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:24:17 functional-080000 kubelet[6452]: I0731 18:24:17.655217    6452 scope.go:117] "RemoveContainer" containerID="a91a948f0234ba7e582a4fb08250907388f1487ae3f2422028bce70719412478"
	Jul 31 18:24:21 functional-080000 kubelet[6452]: I0731 18:24:21.566272    6452 scope.go:117] "RemoveContainer" containerID="3a41c0d974f83d10c125f3dad3a3045cca03ec35eaee16d43c253bcf966b7cdb"
	Jul 31 18:24:21 functional-080000 kubelet[6452]: E0731 18:24:21.566537    6452 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-mgcts_default(f651bfb2-fe9f-4d10-9086-f6ae6692d12c)\"" pod="default/hello-node-65f5d5cc78-mgcts" podUID="f651bfb2-fe9f-4d10-9086-f6ae6692d12c"
	Jul 31 18:24:21 functional-080000 kubelet[6452]: I0731 18:24:21.571543    6452 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=3.873498872 podStartE2EDuration="4.571531986s" podCreationTimestamp="2024-07-31 18:24:17 +0000 UTC" firstStartedPulling="2024-07-31 18:24:17.507337929 +0000 UTC m=+60.016294220" lastFinishedPulling="2024-07-31 18:24:18.205371043 +0000 UTC m=+60.714327334" observedRunningTime="2024-07-31 18:24:19.042620947 +0000 UTC m=+61.551577280" watchObservedRunningTime="2024-07-31 18:24:21.571531986 +0000 UTC m=+64.080488277"
	Jul 31 18:24:25 functional-080000 kubelet[6452]: I0731 18:24:25.376777    6452 topology_manager.go:215] "Topology Admit Handler" podUID="43e249a6-ac8b-4625-85ac-9ae54bcdeb04" podNamespace="default" podName="busybox-mount"
	Jul 31 18:24:25 functional-080000 kubelet[6452]: I0731 18:24:25.450511    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfjzl\" (UniqueName: \"kubernetes.io/projected/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-kube-api-access-lfjzl\") pod \"busybox-mount\" (UID: \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\") " pod="default/busybox-mount"
	Jul 31 18:24:25 functional-080000 kubelet[6452]: I0731 18:24:25.450535    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-test-volume\") pod \"busybox-mount\" (UID: \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\") " pod="default/busybox-mount"
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.268125    6452 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfjzl\" (UniqueName: \"kubernetes.io/projected/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-kube-api-access-lfjzl\") pod \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\" (UID: \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\") "
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.268147    6452 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-test-volume\") pod \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\" (UID: \"43e249a6-ac8b-4625-85ac-9ae54bcdeb04\") "
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.268182    6452 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-test-volume" (OuterVolumeSpecName: "test-volume") pod "43e249a6-ac8b-4625-85ac-9ae54bcdeb04" (UID: "43e249a6-ac8b-4625-85ac-9ae54bcdeb04"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.271097    6452 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-kube-api-access-lfjzl" (OuterVolumeSpecName: "kube-api-access-lfjzl") pod "43e249a6-ac8b-4625-85ac-9ae54bcdeb04" (UID: "43e249a6-ac8b-4625-85ac-9ae54bcdeb04"). InnerVolumeSpecName "kube-api-access-lfjzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.368835    6452 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-test-volume\") on node \"functional-080000\" DevicePath \"\""
	Jul 31 18:24:28 functional-080000 kubelet[6452]: I0731 18:24:28.368850    6452 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lfjzl\" (UniqueName: \"kubernetes.io/projected/43e249a6-ac8b-4625-85ac-9ae54bcdeb04-kube-api-access-lfjzl\") on node \"functional-080000\" DevicePath \"\""
	Jul 31 18:24:29 functional-080000 kubelet[6452]: I0731 18:24:29.084127    6452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa01e0a0fd58522df26532c4dce5b77dc5693be443796e5050cc50c9a376d6f"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.565561    6452 scope.go:117] "RemoveContainer" containerID="76109825a6e6d3c5721e132d9384636476e261a350494c6bda2d5953a95d259f"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: E0731 18:24:31.565645    6452 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-prx58_default(90a7d060-8b65-497a-9679-75e6b5169d8c)\"" pod="default/hello-node-connect-6f49f58cd5-prx58" podUID="90a7d060-8b65-497a-9679-75e6b5169d8c"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.595771    6452 topology_manager.go:215] "Topology Admit Handler" podUID="1bc2d595-a700-4156-ab5d-4745e1bdefd7" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-b5fc48f67-psmwb"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: E0731 18:24:31.595829    6452 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="43e249a6-ac8b-4625-85ac-9ae54bcdeb04" containerName="mount-munger"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.595846    6452 memory_manager.go:354] "RemoveStaleState removing state" podUID="43e249a6-ac8b-4625-85ac-9ae54bcdeb04" containerName="mount-munger"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.606963    6452 topology_manager.go:215] "Topology Admit Handler" podUID="1cbd07bc-0adc-4ea3-8314-cc349a0636fc" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-779776cb65-fq82x"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.690970    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8md6b\" (UniqueName: \"kubernetes.io/projected/1bc2d595-a700-4156-ab5d-4745e1bdefd7-kube-api-access-8md6b\") pod \"dashboard-metrics-scraper-b5fc48f67-psmwb\" (UID: \"1bc2d595-a700-4156-ab5d-4745e1bdefd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-psmwb"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.691003    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1bc2d595-a700-4156-ab5d-4745e1bdefd7-tmp-volume\") pod \"dashboard-metrics-scraper-b5fc48f67-psmwb\" (UID: \"1bc2d595-a700-4156-ab5d-4745e1bdefd7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-psmwb"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.691013    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1cbd07bc-0adc-4ea3-8314-cc349a0636fc-tmp-volume\") pod \"kubernetes-dashboard-779776cb65-fq82x\" (UID: \"1cbd07bc-0adc-4ea3-8314-cc349a0636fc\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-fq82x"
	Jul 31 18:24:31 functional-080000 kubelet[6452]: I0731 18:24:31.691021    6452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75qg\" (UniqueName: \"kubernetes.io/projected/1cbd07bc-0adc-4ea3-8314-cc349a0636fc-kube-api-access-z75qg\") pod \"kubernetes-dashboard-779776cb65-fq82x\" (UID: \"1cbd07bc-0adc-4ea3-8314-cc349a0636fc\") " pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-fq82x"
	
	
	==> storage-provisioner [b1d52ed0c161] <==
	I0731 18:23:21.073212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:23:21.081794       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:23:21.082566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:23:38.471107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:23:38.471305       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-080000_1f57cbe1-a2ba-460a-a84c-f5731d766b33!
	I0731 18:23:38.471701       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c717a7f0-417e-4ec3-b274-5125911f8172", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-080000_1f57cbe1-a2ba-460a-a84c-f5731d766b33 became leader
	I0731 18:23:38.572210       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-080000_1f57cbe1-a2ba-460a-a84c-f5731d766b33!
	I0731 18:24:03.681316       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0731 18:24:03.681674       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c495f470-e586-439c-b0ad-836b03e20feb", APIVersion:"v1", ResourceVersion:"739", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0731 18:24:03.681403       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    e10efc17-75a0-4722-9638-286678ae5fdc 357 0 2024-07-31 18:21:52 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-31 18:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c495f470-e586-439c-b0ad-836b03e20feb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c495f470-e586-439c-b0ad-836b03e20feb 739 0 2024-07-31 18:24:03 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-31 18:24:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-31 18:24:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0731 18:24:03.682035       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c495f470-e586-439c-b0ad-836b03e20feb" provisioned
	I0731 18:24:03.682063       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0731 18:24:03.682082       1 volume_store.go:212] Trying to save persistentvolume "pvc-c495f470-e586-439c-b0ad-836b03e20feb"
	I0731 18:24:03.688081       1 volume_store.go:219] persistentvolume "pvc-c495f470-e586-439c-b0ad-836b03e20feb" saved
	I0731 18:24:03.688317       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c495f470-e586-439c-b0ad-836b03e20feb", APIVersion:"v1", ResourceVersion:"739", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c495f470-e586-439c-b0ad-836b03e20feb
	
	
	==> storage-provisioner [bcf1a63064f7] <==
	I0731 18:22:26.253343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:22:26.256645       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:22:26.256660       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:22:43.643014       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:22:43.643248       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c717a7f0-417e-4ec3-b274-5125911f8172", APIVersion:"v1", ResourceVersion:"497", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-080000_71972fba-098a-4793-be42-1a4aeda37ef9 became leader
	I0731 18:22:43.643305       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-080000_71972fba-098a-4793-be42-1a4aeda37ef9!
	I0731 18:22:43.744053       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-080000_71972fba-098a-4793-be42-1a4aeda37ef9!
	E0731 18:23:03.703678       1 leaderelection.go:361] Failed to update lock: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-080000 -n functional-080000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-080000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-779776cb65-fq82x
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-080000 describe pod busybox-mount kubernetes-dashboard-779776cb65-fq82x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-080000 describe pod busybox-mount kubernetes-dashboard-779776cb65-fq82x: exit status 1 (43.862584ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-080000/192.168.105.4
	Start Time:       Wed, 31 Jul 2024 11:24:25 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://63f75def3fcf1e1d4bc6d785361a47abb989c547cba95250a228214696e047d3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 31 Jul 2024 11:24:26 -0700
	      Finished:     Wed, 31 Jul 2024 11:24:26 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lfjzl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-lfjzl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10s   default-scheduler  Successfully assigned default/busybox-mount to functional-080000
	  Normal  Pulling    11s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.133s (1.133s including waiting). Image size: 3547125 bytes.
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-fq82x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-080000 describe pod busybox-mount kubernetes-dashboard-779776cb65-fq82x: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.66s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (312.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr
E0731 11:29:07.575616    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-688000 node stop m02 -v=7 --alsologtostderr: (12.185521041s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
E0731 11:29:28.057613    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:30:09.018984    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:31:30.939785    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:31:45.977402    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (3m45.048185542s)

                                                
                                                
-- stdout --
	ha-688000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-688000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-688000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:29:11.261384    2873 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:29:11.261559    2873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:29:11.261563    2873 out.go:304] Setting ErrFile to fd 2...
	I0731 11:29:11.261566    2873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:29:11.261725    2873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:29:11.261835    2873 out.go:298] Setting JSON to false
	I0731 11:29:11.261849    2873 mustload.go:65] Loading cluster: ha-688000
	I0731 11:29:11.261948    2873 notify.go:220] Checking for updates...
	I0731 11:29:11.262104    2873 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:29:11.262113    2873 status.go:255] checking status of ha-688000 ...
	I0731 11:29:11.262884    2873 status.go:330] ha-688000 host status = "Running" (err=<nil>)
	I0731 11:29:11.262896    2873 host.go:66] Checking if "ha-688000" exists ...
	I0731 11:29:11.262995    2873 host.go:66] Checking if "ha-688000" exists ...
	I0731 11:29:11.263110    2873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:29:11.263118    2873 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/id_rsa Username:docker}
	W0731 11:30:26.263654    2873 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 11:30:26.263802    2873 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 11:30:26.263824    2873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 11:30:26.263848    2873 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:30:26.263870    2873 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 11:30:26.263880    2873 status.go:255] checking status of ha-688000-m02 ...
	I0731 11:30:26.264093    2873 status.go:330] ha-688000-m02 host status = "Stopped" (err=<nil>)
	I0731 11:30:26.264098    2873 status.go:343] host is not running, skipping remaining checks
	I0731 11:30:26.264101    2873 status.go:257] ha-688000-m02 status: &{Name:ha-688000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:30:26.264105    2873 status.go:255] checking status of ha-688000-m03 ...
	I0731 11:30:26.264732    2873 status.go:330] ha-688000-m03 host status = "Running" (err=<nil>)
	I0731 11:30:26.264740    2873 host.go:66] Checking if "ha-688000-m03" exists ...
	I0731 11:30:26.264863    2873 host.go:66] Checking if "ha-688000-m03" exists ...
	I0731 11:30:26.264981    2873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:30:26.264990    2873 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m03/id_rsa Username:docker}
	W0731 11:31:41.266296    2873 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 11:31:41.266342    2873 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 11:31:41.266353    2873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 11:31:41.266357    2873 status.go:257] ha-688000-m03 status: &{Name:ha-688000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:31:41.266365    2873 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 11:31:41.266373    2873 status.go:255] checking status of ha-688000-m04 ...
	I0731 11:31:41.267044    2873 status.go:330] ha-688000-m04 host status = "Running" (err=<nil>)
	I0731 11:31:41.267052    2873 host.go:66] Checking if "ha-688000-m04" exists ...
	I0731 11:31:41.267160    2873 host.go:66] Checking if "ha-688000-m04" exists ...
	I0731 11:31:41.267284    2873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:31:41.267289    2873 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m04/id_rsa Username:docker}
	W0731 11:32:56.263392    2873 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 11:32:56.263563    2873 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 11:32:56.263601    2873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 11:32:56.263623    2873 status.go:257] ha-688000-m04 status: &{Name:ha-688000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:32:56.263675    2873 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-688000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-688000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr": ha-688000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-688000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-688000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
E0731 11:33:47.070179    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 3 (1m15.076304125s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 11:34:11.340844    3214 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 11:34:11.340900    3214 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.31s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0731 11:34:14.774198    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.094773292s)
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
E0731 11:36:45.966949    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 3 (1m15.038339208s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 11:37:56.471030    3236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 11:37:56.471073    3236 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.13s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (305.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.129674s)

                                                
                                                
-- stdout --
	* Starting "ha-688000-m02" control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:37:56.534516    3247 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:37:56.534799    3247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:37:56.534804    3247 out.go:304] Setting ErrFile to fd 2...
	I0731 11:37:56.534807    3247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:37:56.534978    3247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:37:56.535296    3247 mustload.go:65] Loading cluster: ha-688000
	I0731 11:37:56.535602    3247 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 11:37:56.535887    3247 host.go:58] "ha-688000-m02" host status: Stopped
	I0731 11:37:56.540517    3247 out.go:177] * Starting "ha-688000-m02" control-plane node in "ha-688000" cluster
	I0731 11:37:56.546509    3247 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:37:56.546527    3247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:37:56.546551    3247 cache.go:56] Caching tarball of preloaded images
	I0731 11:37:56.546652    3247 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:37:56.546659    3247 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:37:56.546754    3247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/ha-688000/config.json ...
	I0731 11:37:56.547455    3247 start.go:360] acquireMachinesLock for ha-688000-m02: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:37:56.547510    3247 start.go:364] duration metric: took 34.75µs to acquireMachinesLock for "ha-688000-m02"
	I0731 11:37:56.547527    3247 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:37:56.547533    3247 fix.go:54] fixHost starting: m02
	I0731 11:37:56.547704    3247 fix.go:112] recreateIfNeeded on ha-688000-m02: state=Stopped err=<nil>
	W0731 11:37:56.547711    3247 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:37:56.551536    3247 out.go:177] * Restarting existing qemu2 VM for "ha-688000-m02" ...
	I0731 11:37:56.555377    3247 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:37:56.555462    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:91:98:de:6d:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/disk.qcow2
	I0731 11:37:56.558611    3247 main.go:141] libmachine: STDOUT: 
	I0731 11:37:56.558633    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:37:56.558664    3247 fix.go:56] duration metric: took 11.131209ms for fixHost
	I0731 11:37:56.558669    3247 start.go:83] releasing machines lock for "ha-688000-m02", held for 11.1545ms
	W0731 11:37:56.558677    3247 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:37:56.558718    3247 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:37:56.558724    3247 start.go:729] Will try again in 5 seconds ...
	I0731 11:38:01.560818    3247 start.go:360] acquireMachinesLock for ha-688000-m02: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:38:01.561311    3247 start.go:364] duration metric: took 403.333µs to acquireMachinesLock for "ha-688000-m02"
	I0731 11:38:01.561476    3247 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:38:01.561501    3247 fix.go:54] fixHost starting: m02
	I0731 11:38:01.562458    3247 fix.go:112] recreateIfNeeded on ha-688000-m02: state=Stopped err=<nil>
	W0731 11:38:01.562490    3247 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:38:01.567076    3247 out.go:177] * Restarting existing qemu2 VM for "ha-688000-m02" ...
	I0731 11:38:01.572053    3247 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:38:01.572305    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:91:98:de:6d:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/disk.qcow2
	I0731 11:38:01.579281    3247 main.go:141] libmachine: STDOUT: 
	I0731 11:38:01.579332    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:38:01.579415    3247 fix.go:56] duration metric: took 17.919791ms for fixHost
	I0731 11:38:01.579434    3247 start.go:83] releasing machines lock for "ha-688000-m02", held for 18.103208ms
	W0731 11:38:01.579607    3247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:38:01.583982    3247 out.go:177] 
	W0731 11:38:01.588110    3247 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:38:01.588130    3247 out.go:239] * 
	* 
	W0731 11:38:01.594065    3247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:38:01.598994    3247 out.go:177] 

** /stderr **
ha_test.go:422: I0731 11:37:56.534516    3247 out.go:291] Setting OutFile to fd 1 ...
I0731 11:37:56.534799    3247 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:37:56.534804    3247 out.go:304] Setting ErrFile to fd 2...
I0731 11:37:56.534807    3247 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:37:56.534978    3247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:37:56.535296    3247 mustload.go:65] Loading cluster: ha-688000
I0731 11:37:56.535602    3247 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0731 11:37:56.535887    3247 host.go:58] "ha-688000-m02" host status: Stopped
I0731 11:37:56.540517    3247 out.go:177] * Starting "ha-688000-m02" control-plane node in "ha-688000" cluster
I0731 11:37:56.546509    3247 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0731 11:37:56.546527    3247 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0731 11:37:56.546551    3247 cache.go:56] Caching tarball of preloaded images
I0731 11:37:56.546652    3247 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0731 11:37:56.546659    3247 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0731 11:37:56.546754    3247 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/ha-688000/config.json ...
I0731 11:37:56.547455    3247 start.go:360] acquireMachinesLock for ha-688000-m02: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 11:37:56.547510    3247 start.go:364] duration metric: took 34.75µs to acquireMachinesLock for "ha-688000-m02"
I0731 11:37:56.547527    3247 start.go:96] Skipping create...Using existing machine configuration
I0731 11:37:56.547533    3247 fix.go:54] fixHost starting: m02
I0731 11:37:56.547704    3247 fix.go:112] recreateIfNeeded on ha-688000-m02: state=Stopped err=<nil>
W0731 11:37:56.547711    3247 fix.go:138] unexpected machine state, will restart: <nil>
I0731 11:37:56.551536    3247 out.go:177] * Restarting existing qemu2 VM for "ha-688000-m02" ...
I0731 11:37:56.555377    3247 qemu.go:418] Using hvf for hardware acceleration
I0731 11:37:56.555462    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:91:98:de:6d:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/disk.qcow2
I0731 11:37:56.558611    3247 main.go:141] libmachine: STDOUT: 
I0731 11:37:56.558633    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 11:37:56.558664    3247 fix.go:56] duration metric: took 11.131209ms for fixHost
I0731 11:37:56.558669    3247 start.go:83] releasing machines lock for "ha-688000-m02", held for 11.1545ms
W0731 11:37:56.558677    3247 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 11:37:56.558718    3247 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 11:37:56.558724    3247 start.go:729] Will try again in 5 seconds ...
I0731 11:38:01.560818    3247 start.go:360] acquireMachinesLock for ha-688000-m02: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0731 11:38:01.561311    3247 start.go:364] duration metric: took 403.333µs to acquireMachinesLock for "ha-688000-m02"
I0731 11:38:01.561476    3247 start.go:96] Skipping create...Using existing machine configuration
I0731 11:38:01.561501    3247 fix.go:54] fixHost starting: m02
I0731 11:38:01.562458    3247 fix.go:112] recreateIfNeeded on ha-688000-m02: state=Stopped err=<nil>
W0731 11:38:01.562490    3247 fix.go:138] unexpected machine state, will restart: <nil>
I0731 11:38:01.567076    3247 out.go:177] * Restarting existing qemu2 VM for "ha-688000-m02" ...
I0731 11:38:01.572053    3247 qemu.go:418] Using hvf for hardware acceleration
I0731 11:38:01.572305    3247 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:91:98:de:6d:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/disk.qcow2
I0731 11:38:01.579281    3247 main.go:141] libmachine: STDOUT: 
I0731 11:38:01.579332    3247 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0731 11:38:01.579415    3247 fix.go:56] duration metric: took 17.919791ms for fixHost
I0731 11:38:01.579434    3247 start.go:83] releasing machines lock for "ha-688000-m02", held for 18.103208ms
W0731 11:38:01.579607    3247 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0731 11:38:01.583982    3247 out.go:177] 
W0731 11:38:01.588110    3247 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0731 11:38:01.588130    3247 out.go:239] * 
* 
W0731 11:38:01.594065    3247 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 11:38:01.598994    3247 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node start m02 -v=7 --alsologtostderr": exit status 80
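
Every restart attempt in the log above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives its network file descriptor and the node host never starts. A minimal reachability probe for that socket, as a sketch in Go (the socket path is copied from the failing libmachine command line above; the probe itself is not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the failing socket_vmnet_client invocation above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the STDERR lines in the log,
			// i.e. no socket_vmnet daemon is listening (or the socket is stale).
			fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
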
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
E0731 11:38:09.031757    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:38:47.064954    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:41:45.961436    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (3m45.067238959s)

-- stdout --
	ha-688000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-688000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-688000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0731 11:38:01.658783    3251 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:38:01.658982    3251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:38:01.658989    3251 out.go:304] Setting ErrFile to fd 2...
	I0731 11:38:01.658992    3251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:38:01.659155    3251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:38:01.659306    3251 out.go:298] Setting JSON to false
	I0731 11:38:01.659316    3251 mustload.go:65] Loading cluster: ha-688000
	I0731 11:38:01.659350    3251 notify.go:220] Checking for updates...
	I0731 11:38:01.659562    3251 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:38:01.659570    3251 status.go:255] checking status of ha-688000 ...
	I0731 11:38:01.660366    3251 status.go:330] ha-688000 host status = "Running" (err=<nil>)
	I0731 11:38:01.660378    3251 host.go:66] Checking if "ha-688000" exists ...
	I0731 11:38:01.660509    3251 host.go:66] Checking if "ha-688000" exists ...
	I0731 11:38:01.660640    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:38:01.660649    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/id_rsa Username:docker}
	W0731 11:39:16.659854    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0731 11:39:16.660100    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 11:39:16.660140    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 11:39:16.660157    3251 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:39:16.660194    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0731 11:39:16.660208    3251 status.go:255] checking status of ha-688000-m02 ...
	I0731 11:39:16.660953    3251 status.go:330] ha-688000-m02 host status = "Stopped" (err=<nil>)
	I0731 11:39:16.660973    3251 status.go:343] host is not running, skipping remaining checks
	I0731 11:39:16.660980    3251 status.go:257] ha-688000-m02 status: &{Name:ha-688000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:39:16.660998    3251 status.go:255] checking status of ha-688000-m03 ...
	I0731 11:39:16.662815    3251 status.go:330] ha-688000-m03 host status = "Running" (err=<nil>)
	I0731 11:39:16.662830    3251 host.go:66] Checking if "ha-688000-m03" exists ...
	I0731 11:39:16.663120    3251 host.go:66] Checking if "ha-688000-m03" exists ...
	I0731 11:39:16.663494    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:39:16.663516    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m03/id_rsa Username:docker}
	W0731 11:40:31.664160    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 11:40:31.664255    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0731 11:40:31.664271    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 11:40:31.664281    3251 status.go:257] ha-688000-m03 status: &{Name:ha-688000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:40:31.664299    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 11:40:31.664308    3251 status.go:255] checking status of ha-688000-m04 ...
	I0731 11:40:31.666097    3251 status.go:330] ha-688000-m04 host status = "Running" (err=<nil>)
	I0731 11:40:31.666111    3251 host.go:66] Checking if "ha-688000-m04" exists ...
	I0731 11:40:31.666330    3251 host.go:66] Checking if "ha-688000-m04" exists ...
	I0731 11:40:31.666586    3251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 11:40:31.666600    3251 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m04/id_rsa Username:docker}
	W0731 11:41:46.667879    3251 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 11:41:46.667919    3251 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0731 11:41:46.667927    3251 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 11:41:46.667930    3251 status.go:257] ha-688000-m04 status: &{Name:ha-688000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 11:41:46.667939    3251 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 3 (1m15.038668875s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0731 11:43:01.702125    3269 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0731 11:43:01.702172    3269 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.24s)
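
Note on the 305s duration: the status run above waits exactly 75 seconds per node on "dial tcp <ip>:22" before reporting host: Error, and those three SSH timeouts account for essentially all of the 3m45s wall time. A sketch of the same reachability check with a short, illustrative deadline (node IPs and port 22 are taken from the sshutil lines above; the 5-second timeout is an arbitrary choice, not minikube's):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Addresses taken from the sshutil dial failures in the log above.
		for _, ip := range []string{"192.168.105.5", "192.168.105.7", "192.168.105.8"} {
			addr := net.JoinHostPort(ip, "22")
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				fmt.Printf("%s: unreachable (%v)\n", addr, err)
				continue
			}
			conn.Close()
			fmt.Printf("%s: ssh port open\n", addr)
		}
	}
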

TestMultiControlPlane/serial/RestartClusterKeepsNodes (329.57s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr
E0731 11:46:45.956288    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:48:47.020951    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-688000 -v=7 --alsologtostderr: (5m24.1622235s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-688000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.22942775s)

-- stdout --
	* [ha-688000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-688000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:50:56.065659    3333 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:50:56.065858    3333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:50:56.065863    3333 out.go:304] Setting ErrFile to fd 2...
	I0731 11:50:56.065867    3333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:50:56.066054    3333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:50:56.067433    3333 out.go:298] Setting JSON to false
	I0731 11:50:56.088256    3333 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3025,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:50:56.088331    3333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:50:56.093254    3333 out.go:177] * [ha-688000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:50:56.101069    3333 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:50:56.101115    3333 notify.go:220] Checking for updates...
	I0731 11:50:56.109171    3333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:50:56.113159    3333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:50:56.116159    3333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:50:56.119243    3333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:50:56.122183    3333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:50:56.125518    3333 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:50:56.125573    3333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:50:56.130221    3333 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 11:50:56.137130    3333 start.go:297] selected driver: qemu2
	I0731 11:50:56.137136    3333 start.go:901] validating driver "qemu2" against &{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-688000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:50:56.137211    3333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:50:56.140160    3333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:50:56.140206    3333 cni.go:84] Creating CNI manager for ""
	I0731 11:50:56.140212    3333 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 11:50:56.140263    3333 start.go:340] cluster config:
	{Name:ha-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-688000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:50:56.144772    3333 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:50:56.153137    3333 out.go:177] * Starting "ha-688000" primary control-plane node in "ha-688000" cluster
	I0731 11:50:56.157221    3333 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:50:56.157234    3333 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:50:56.157244    3333 cache.go:56] Caching tarball of preloaded images
	I0731 11:50:56.157306    3333 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:50:56.157312    3333 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:50:56.157379    3333 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/ha-688000/config.json ...
	I0731 11:50:56.157849    3333 start.go:360] acquireMachinesLock for ha-688000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:50:56.157887    3333 start.go:364] duration metric: took 30.708µs to acquireMachinesLock for "ha-688000"
	I0731 11:50:56.157896    3333 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:50:56.157904    3333 fix.go:54] fixHost starting: 
	I0731 11:50:56.158031    3333 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0731 11:50:56.158039    3333 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:50:56.161223    3333 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0731 11:50:56.169161    3333 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:50:56.169204    3333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:3d:3c:3c:3b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/disk.qcow2
	I0731 11:50:56.171459    3333 main.go:141] libmachine: STDOUT: 
	I0731 11:50:56.171482    3333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:50:56.171513    3333 fix.go:56] duration metric: took 13.610791ms for fixHost
	I0731 11:50:56.171521    3333 start.go:83] releasing machines lock for "ha-688000", held for 13.629833ms
	W0731 11:50:56.171537    3333 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:50:56.171581    3333 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:50:56.171586    3333 start.go:729] Will try again in 5 seconds ...
	I0731 11:51:01.173748    3333 start.go:360] acquireMachinesLock for ha-688000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:51:01.174115    3333 start.go:364] duration metric: took 281.875µs to acquireMachinesLock for "ha-688000"
	I0731 11:51:01.174319    3333 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:51:01.174339    3333 fix.go:54] fixHost starting: 
	I0731 11:51:01.175031    3333 fix.go:112] recreateIfNeeded on ha-688000: state=Stopped err=<nil>
	W0731 11:51:01.175066    3333 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:51:01.183431    3333 out.go:177] * Restarting existing qemu2 VM for "ha-688000" ...
	I0731 11:51:01.187453    3333 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:51:01.187737    3333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:3d:3c:3c:3b:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000/disk.qcow2
	I0731 11:51:01.196853    3333 main.go:141] libmachine: STDOUT: 
	I0731 11:51:01.196932    3333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:51:01.197016    3333 fix.go:56] duration metric: took 22.674167ms for fixHost
	I0731 11:51:01.197039    3333 start.go:83] releasing machines lock for "ha-688000", held for 22.901208ms
	W0731 11:51:01.197276    3333 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-688000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:51:01.204370    3333 out.go:177] 
	W0731 11:51:01.208543    3333 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:51:01.208565    3333 out.go:239] * 
	* 
	W0731 11:51:01.211376    3333 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:51:01.219321    3333 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-688000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-688000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (31.837333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (329.57s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr: exit status 83 (38.149834ms)

-- stdout --
	* The control-plane node ha-688000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-688000"

-- /stdout --
** stderr ** 
	I0731 11:51:01.359855    3345 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:51:01.360108    3345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.360111    3345 out.go:304] Setting ErrFile to fd 2...
	I0731 11:51:01.360114    3345 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.360251    3345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:51:01.360466    3345 mustload.go:65] Loading cluster: ha-688000
	I0731 11:51:01.360684    3345 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0731 11:51:01.360990    3345 out.go:239] ! The control-plane node ha-688000 host is not running (will try others): state=Stopped
	! The control-plane node ha-688000 host is not running (will try others): state=Stopped
	W0731 11:51:01.361099    3345 out.go:239] ! The control-plane node ha-688000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-688000-m02 host is not running (will try others): state=Stopped
	I0731 11:51:01.364787    3345 out.go:177] * The control-plane node ha-688000-m03 host is not running: state=Stopped
	I0731 11:51:01.367756    3345 out.go:177]   To start a cluster, run: "minikube start -p ha-688000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-688000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: exit status 7 (29.346583ms)

-- stdout --
	ha-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-688000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0731 11:51:01.397377    3347 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:51:01.397534    3347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.397538    3347 out.go:304] Setting ErrFile to fd 2...
	I0731 11:51:01.397540    3347 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.397679    3347 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:51:01.397814    3347 out.go:298] Setting JSON to false
	I0731 11:51:01.397823    3347 mustload.go:65] Loading cluster: ha-688000
	I0731 11:51:01.397881    3347 notify.go:220] Checking for updates...
	I0731 11:51:01.398063    3347 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:51:01.398070    3347 status.go:255] checking status of ha-688000 ...
	I0731 11:51:01.398291    3347 status.go:330] ha-688000 host status = "Stopped" (err=<nil>)
	I0731 11:51:01.398294    3347 status.go:343] host is not running, skipping remaining checks
	I0731 11:51:01.398296    3347 status.go:257] ha-688000 status: &{Name:ha-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:51:01.398306    3347 status.go:255] checking status of ha-688000-m02 ...
	I0731 11:51:01.398391    3347 status.go:330] ha-688000-m02 host status = "Stopped" (err=<nil>)
	I0731 11:51:01.398394    3347 status.go:343] host is not running, skipping remaining checks
	I0731 11:51:01.398396    3347 status.go:257] ha-688000-m02 status: &{Name:ha-688000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:51:01.398400    3347 status.go:255] checking status of ha-688000-m03 ...
	I0731 11:51:01.398492    3347 status.go:330] ha-688000-m03 host status = "Stopped" (err=<nil>)
	I0731 11:51:01.398495    3347 status.go:343] host is not running, skipping remaining checks
	I0731 11:51:01.398497    3347 status.go:257] ha-688000-m03 status: &{Name:ha-688000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 11:51:01.398503    3347 status.go:255] checking status of ha-688000-m04 ...
	I0731 11:51:01.398595    3347 status.go:330] ha-688000-m04 host status = "Stopped" (err=<nil>)
	I0731 11:51:01.398598    3347 status.go:343] host is not running, skipping remaining checks
	I0731 11:51:01.398600    3347 status.go:257] ha-688000-m04 status: &{Name:ha-688000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.107167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-688000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-688000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-688000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-688000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (29.193709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
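The profile dump in the failure message above is the raw `minikube profile list --output json` payload: a `valid` array of profiles, each carrying its cluster `Config`, a `Nodes` list, and per-node `ControlPlane`/`Worker` flags. A minimal decoding sketch follows; the struct is an illustrative trimmed subset, with field names taken from the dump, not minikube's real config type.

// profile_nodes.go - sketch: decode a `minikube profile list --output json`
// payload (fed on stdin) and print each node's role. The nested struct is an
// illustrative subset of the config shown in the dump above.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type profileList struct {
	Valid []struct {
		Name   string
		Active bool
		Config struct {
			KubernetesConfig struct {
				ClusterName    string
				APIServerHAVIP string
			}
			Nodes []struct {
				Name         string
				IP           string
				Port         int
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	data, err := io.ReadAll(os.Stdin) // e.g. minikube profile list -o json | go run profile_nodes.go
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s (active=%v, HA VIP=%s)\n", p.Name, p.Active, p.Config.KubernetesConfig.APIServerHAVIP)
		for _, n := range p.Config.Nodes {
			fmt.Printf("  node %q %s:%d control-plane=%v worker=%v\n", n.Name, n.IP, n.Port, n.ControlPlane, n.Worker)
		}
	}
}

Fed the dump above on stdin, this would list ha-688000's four nodes, with m04 reporting Port 0 and ControlPlane false.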

TestMultiControlPlane/serial/StopCluster (218.29s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr
E0731 11:51:45.917860    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:53:47.015570    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr: signal: killed (3m38.217802083s)

-- stdout --
	* Stopping node "ha-688000-m04"  ...
	* Stopping node "ha-688000-m03"  ...
	* Stopping node "ha-688000-m02"  ...

-- /stdout --
** stderr ** 
	I0731 11:51:01.534318    3356 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:51:01.534473    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.534476    3356 out.go:304] Setting ErrFile to fd 2...
	I0731 11:51:01.534479    3356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:51:01.534616    3356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:51:01.534827    3356 out.go:298] Setting JSON to false
	I0731 11:51:01.534924    3356 mustload.go:65] Loading cluster: ha-688000
	I0731 11:51:01.535152    3356 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:51:01.535207    3356 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/ha-688000/config.json ...
	I0731 11:51:01.535456    3356 mustload.go:65] Loading cluster: ha-688000
	I0731 11:51:01.535535    3356 config.go:182] Loaded profile config "ha-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:51:01.535558    3356 stop.go:39] StopHost: ha-688000-m04
	I0731 11:51:01.540687    3356 out.go:177] * Stopping node "ha-688000-m04"  ...
	I0731 11:51:01.548601    3356 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 11:51:01.548640    3356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 11:51:01.548649    3356 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m04/id_rsa Username:docker}
	W0731 11:52:16.549178    3356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0731 11:52:16.549482    3356 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0731 11:52:16.549638    3356 main.go:141] libmachine: Stopping "ha-688000-m04"...
	I0731 11:52:16.549775    3356 stop.go:66] stop err: Machine "ha-688000-m04" is already stopped.
	I0731 11:52:16.549834    3356 stop.go:69] host is already stopped
	I0731 11:52:16.549864    3356 stop.go:39] StopHost: ha-688000-m03
	I0731 11:52:16.554936    3356 out.go:177] * Stopping node "ha-688000-m03"  ...
	I0731 11:52:16.562892    3356 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 11:52:16.563137    3356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 11:52:16.563169    3356 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m03/id_rsa Username:docker}
	W0731 11:53:31.564222    3356 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0731 11:53:31.564430    3356 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0731 11:53:31.564572    3356 main.go:141] libmachine: Stopping "ha-688000-m03"...
	I0731 11:53:31.564719    3356 stop.go:66] stop err: Machine "ha-688000-m03" is already stopped.
	I0731 11:53:31.564748    3356 stop.go:69] host is already stopped
	I0731 11:53:31.564777    3356 stop.go:39] StopHost: ha-688000-m02
	I0731 11:53:31.573141    3356 out.go:177] * Stopping node "ha-688000-m02"  ...
	I0731 11:53:31.577087    3356 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 11:53:31.577224    3356 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 11:53:31.577259    3356 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/ha-688000-m02/id_rsa Username:docker}

** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-688000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr: context deadline exceeded (2.083µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr" : context deadline exceeded
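The stderr timestamps explain the kill: each SSH dial to an already-stopped node (11:51:01, 11:52:16, 11:53:31) blocked for exactly 75 s before "operation timed out", which looks like the platform's default TCP connect timeout, so three dead nodes need about 3m45s while the harness killed the command at 3m38s. A sketch of bounding each dial explicitly; the node addresses come from the log, and the 10 s budget is an assumed example value.

// ssh_dial_deadline.go - sketch: give each SSH dial its own deadline so several
// dead nodes cannot consume the whole test budget. In the log above each dial
// to a stopped node blocked ~75s (the OS connect timeout) before failing.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"192.168.105.8:22", "192.168.105.7:22", "192.168.105.6:22"} {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		var d net.Dialer
		conn, err := d.DialContext(ctx, "tcp", addr)
		cancel()
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // fails after at most 10s, not 75s
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}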
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-688000 -n ha-688000: exit status 7 (67.976959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-688000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (218.29s)

TestImageBuild/serial/Setup (10.22s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-666000 --driver=qemu2 
E0731 11:54:48.982851    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-666000 --driver=qemu2 : exit status 80 (10.156972208s)

-- stdout --
	* [image-666000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-666000" primary control-plane node in "image-666000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-666000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-666000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-666000 --driver=qemu2 " : exit status 80
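Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never gets its network file descriptor. "Connection refused" on a unix socket means nothing is listening at that path, which points at the socket_vmnet service on the build host rather than at minikube itself. A quick probe one could run to confirm; a minimal sketch, and the 2 s timeout is an arbitrary choice.

// socket_check.go - sketch: probe the socket_vmnet control socket that the
// qemu2 driver needs. "connection refused" here matches the provisioning
// failures above and means no daemon is listening on the socket.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}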
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-666000 -n image-666000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-666000 -n image-666000: exit status 7 (66.134125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-666000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.22s)
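Each post-mortem in this report probes the host with `status --format={{.Host}}`, a Go text/template rendered over minikube's status struct, which is why the block's only stdout is the single word "Stopped". A minimal sketch of that rendering; the Status type here is illustrative, not minikube's real one.

// format_host.go - sketch of rendering a --format={{.Host}} style Go template.
// The Status struct is illustrative; minikube's actual status type has more fields.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host    string
	Kubelet string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// A stopped VM reports "Stopped", matching the post-mortem output above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}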

TestJSONOutput/start/Command (9.77s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-011000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-011000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.765655125s)

-- stdout --
	{"specversion":"1.0","id":"e758a073-644a-4e8e-89ce-da30aee61d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-011000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f81e35e7-1d2c-4f9f-a0ab-15f72462be3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19356"}}
	{"specversion":"1.0","id":"a36c31b3-6903-48bf-84a2-3cf064912307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig"}}
	{"specversion":"1.0","id":"467a9559-cc3e-4ad7-9be6-cb6c11fe3573","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bff07118-065b-4219-8609-ca472f7fb7d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"97aa4d63-c951-4539-9019-73f3ed71dd7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube"}}
	{"specversion":"1.0","id":"9aa9d1d5-ab2c-4425-9027-6a76cd542314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a108c4e-3011-4ff8-89f2-87aa163f287d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"66043386-9ac0-456a-86ec-36735221e5f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"af249a41-044e-45bc-8ed7-330a77af2b68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-011000\" primary control-plane node in \"json-output-011000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"122c5b41-af44-4c52-9105-4da876283bbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"bf1e5536-a1eb-4a07-ba3b-1ebb0f892024","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-011000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ff857ee-ea77-4f82-92c9-a9d4358ce12b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"e0a99484-e9af-473b-8164-fff787cda335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"805c212e-0f70-4bf4-aa2a-cd43d10e8c55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-011000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"94b42237-2fd0-4ced-8e7e-9b8f5da0105d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"799ab388-3979-49ab-9f04-a4fb5c7846fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-011000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.77s)
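The marshal failure is mechanical: the test decodes stdout line by line as JSON cloud events, but socket_vmnet_client's bare "OUTPUT: " and "ERROR: ..." lines are interleaved into the stream. A minimal reproduction of the decoder's view, with sample lines abbreviated from the output above.

// cloudevents_lines.go - sketch: decode stdout line by line as JSON events,
// the way the test does; the first non-JSON line breaks json.Unmarshal.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	out := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}

json.Unmarshal stops on the O of OUTPUT:, which is exactly the "invalid character 'O' looking for beginning of value" reported at json_output_test.go:70; the unpause failure below trips the same check on a leading '*'.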

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-011000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-011000 --output=json --user=testUser: exit status 83 (77.000333ms)

-- stdout --
	{"specversion":"1.0","id":"005181b0-1c4f-4568-8bbd-bfa6b27e2774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-011000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"929e173f-5ffc-404d-af31-16a864b44d13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-011000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-011000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-011000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-011000 --output=json --user=testUser: exit status 83 (44.190083ms)

-- stdout --
	* The control-plane node json-output-011000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-011000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-011000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-011000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-270000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-270000 --driver=qemu2 : exit status 80 (9.731904125s)

-- stdout --
	* [first-270000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-270000" primary control-plane node in "first-270000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-270000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-270000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-270000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 11:55:12.496524 -0700 PDT m=+2472.322320626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-272000 -n second-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-272000 -n second-272000: exit status 85 (78.870375ms)

-- stdout --
	* Profile "second-272000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-272000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-272000" host is not running, skipping log retrieval (state="* Profile \"second-272000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-272000\"")
helpers_test.go:175: Cleaning up "second-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-272000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-31 11:55:12.682402 -0700 PDT m=+2472.508202835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-270000 -n first-270000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-270000 -n first-270000: exit status 7 (29.187ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-270000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-270000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-270000
--- FAIL: TestMinikubeProfile (10.02s)

TestMountStart/serial/StartWithMountFirst (10.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-312000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-312000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.973254916s)

-- stdout --
	* [mount-start-1-312000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-312000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-312000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-312000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-312000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-312000 -n mount-start-1-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-312000 -n mount-start-1-312000: exit status 7 (66.412333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-312000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.04s)

TestMultiNode/serial/FreshStart2Nodes (9.87s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-481000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-481000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.806151042s)

-- stdout --
	* [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-481000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:55:23.037987    3505 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:55:23.038116    3505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:55:23.038120    3505 out.go:304] Setting ErrFile to fd 2...
	I0731 11:55:23.038122    3505 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:55:23.038247    3505 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:55:23.039270    3505 out.go:298] Setting JSON to false
	I0731 11:55:23.055527    3505 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3292,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:55:23.055594    3505 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:55:23.061447    3505 out.go:177] * [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:55:23.069653    3505 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:55:23.069705    3505 notify.go:220] Checking for updates...
	I0731 11:55:23.077537    3505 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:55:23.081639    3505 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:55:23.084546    3505 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:55:23.091589    3505 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:55:23.094558    3505 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:55:23.097839    3505 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:55:23.102633    3505 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:55:23.109592    3505 start.go:297] selected driver: qemu2
	I0731 11:55:23.109598    3505 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:55:23.109604    3505 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:55:23.111962    3505 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:55:23.116631    3505 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:55:23.119711    3505 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:55:23.119734    3505 cni.go:84] Creating CNI manager for ""
	I0731 11:55:23.119740    3505 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 11:55:23.119744    3505 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 11:55:23.119773    3505 start.go:340] cluster config:
	{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:55:23.123779    3505 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:55:23.130576    3505 out.go:177] * Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	I0731 11:55:23.134595    3505 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:55:23.134614    3505 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:55:23.134627    3505 cache.go:56] Caching tarball of preloaded images
	I0731 11:55:23.134691    3505 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:55:23.134699    3505 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:55:23.134940    3505 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/multinode-481000/config.json ...
	I0731 11:55:23.134951    3505 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/multinode-481000/config.json: {Name:mk69b26150597e0690aef757f79d2ec376e1ca82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:55:23.135183    3505 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:55:23.135220    3505 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "multinode-481000"
	I0731 11:55:23.135230    3505 start.go:93] Provisioning new machine with config: &{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:55:23.135260    3505 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:55:23.142661    3505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 11:55:23.161678    3505 start.go:159] libmachine.API.Create for "multinode-481000" (driver="qemu2")
	I0731 11:55:23.161706    3505 client.go:168] LocalClient.Create starting
	I0731 11:55:23.161775    3505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:55:23.161806    3505 main.go:141] libmachine: Decoding PEM data...
	I0731 11:55:23.161816    3505 main.go:141] libmachine: Parsing certificate...
	I0731 11:55:23.161856    3505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:55:23.161880    3505 main.go:141] libmachine: Decoding PEM data...
	I0731 11:55:23.161893    3505 main.go:141] libmachine: Parsing certificate...
	I0731 11:55:23.162245    3505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:55:23.307662    3505 main.go:141] libmachine: Creating SSH key...
	I0731 11:55:23.416554    3505 main.go:141] libmachine: Creating Disk image...
	I0731 11:55:23.416559    3505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:55:23.416783    3505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:23.425668    3505 main.go:141] libmachine: STDOUT: 
	I0731 11:55:23.425684    3505 main.go:141] libmachine: STDERR: 
	I0731 11:55:23.425724    3505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2 +20000M
	I0731 11:55:23.433429    3505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:55:23.433446    3505 main.go:141] libmachine: STDERR: 
	I0731 11:55:23.433459    3505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:23.433462    3505 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:55:23.433471    3505 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:55:23.433499    3505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ba:bf:e9:5d:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:23.435106    3505 main.go:141] libmachine: STDOUT: 
	I0731 11:55:23.435119    3505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:55:23.435137    3505 client.go:171] duration metric: took 273.429375ms to LocalClient.Create
	I0731 11:55:25.437277    3505 start.go:128] duration metric: took 2.302046375s to createHost
	I0731 11:55:25.437340    3505 start.go:83] releasing machines lock for "multinode-481000", held for 2.302160208s
	W0731 11:55:25.437448    3505 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:55:25.448607    3505 out.go:177] * Deleting "multinode-481000" in qemu2 ...
	W0731 11:55:25.477476    3505 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:55:25.477499    3505 start.go:729] Will try again in 5 seconds ...
	I0731 11:55:30.479612    3505 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:55:30.480011    3505 start.go:364] duration metric: took 333.542µs to acquireMachinesLock for "multinode-481000"
	I0731 11:55:30.480140    3505 start.go:93] Provisioning new machine with config: &{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:55:30.480416    3505 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:55:30.497934    3505 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 11:55:30.550732    3505 start.go:159] libmachine.API.Create for "multinode-481000" (driver="qemu2")
	I0731 11:55:30.550781    3505 client.go:168] LocalClient.Create starting
	I0731 11:55:30.550910    3505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:55:30.550979    3505 main.go:141] libmachine: Decoding PEM data...
	I0731 11:55:30.550997    3505 main.go:141] libmachine: Parsing certificate...
	I0731 11:55:30.551066    3505 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:55:30.551116    3505 main.go:141] libmachine: Decoding PEM data...
	I0731 11:55:30.551133    3505 main.go:141] libmachine: Parsing certificate...
	I0731 11:55:30.551634    3505 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:55:30.708718    3505 main.go:141] libmachine: Creating SSH key...
	I0731 11:55:30.750529    3505 main.go:141] libmachine: Creating Disk image...
	I0731 11:55:30.750538    3505 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:55:30.750746    3505 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:30.759859    3505 main.go:141] libmachine: STDOUT: 
	I0731 11:55:30.759879    3505 main.go:141] libmachine: STDERR: 
	I0731 11:55:30.759938    3505 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2 +20000M
	I0731 11:55:30.767633    3505 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:55:30.767648    3505 main.go:141] libmachine: STDERR: 
	I0731 11:55:30.767659    3505 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:30.767664    3505 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:55:30.767683    3505 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:55:30.767709    3505 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:cb:37:80:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:55:30.769217    3505 main.go:141] libmachine: STDOUT: 
	I0731 11:55:30.769231    3505 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:55:30.769250    3505 client.go:171] duration metric: took 218.458542ms to LocalClient.Create
	I0731 11:55:32.771382    3505 start.go:128] duration metric: took 2.290985041s to createHost
	I0731 11:55:32.771431    3505 start.go:83] releasing machines lock for "multinode-481000", held for 2.291443959s
	W0731 11:55:32.771790    3505 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:55:32.781352    3505 out.go:177] 
	W0731 11:55:32.791401    3505 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:55:32.791436    3505 out.go:239] * 
	* 
	W0731 11:55:32.793934    3505 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:55:32.802246    3505 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-481000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
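Before networking fails, the stderr log shows the disk pipeline succeeding twice: libmachine shells out to qemu-img to convert the raw boot image to qcow2 and then grow it by 20000 MB, and only the subsequent socket_vmnet_client launch errors out. A sketch of those two qemu-img calls; the paths are illustrative placeholders, while the flags are the ones visible in the log.

// disk_image.go - sketch of the disk preparation steps visible in the log:
// convert the raw image to qcow2, then grow it by 20000 MB.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	raw, img := "disk.qcow2.raw", "disk.qcow2" // illustrative paths
	for _, args := range [][]string{
		{"convert", "-f", "raw", "-O", "qcow2", raw, img},
		{"resize", img, "+20000M"},
	} {
		out, err := exec.Command("qemu-img", args...).CombinedOutput()
		fmt.Printf("qemu-img %v: %s\n", args, out)
		if err != nil {
			fmt.Println("failed:", err)
			return
		}
	}
}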
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (65.194834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.87s)
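Note: every TestMultiNode failure below shares the root cause recorded above: the qemu2 driver could not reach the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"), so the VM, and therefore the cluster, was never created. A minimal Go sketch of a preflight probe that reproduces the error outside of minikube (the socket path is taken from the log; the program itself is illustrative, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the failure log; everything else is illustrative.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this host this prints: ... connect: connection refused
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

When the dial fails the way it does here, restarting the socket_vmnet service, or running "minikube delete -p multinode-481000" as the log itself suggests, is the usual first step.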

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (107.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (123.879958ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-481000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- rollout status deployment/busybox: exit status 1 (58.108167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.664917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.51625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.124333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.4685ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.756667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.424ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.793875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.043459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.697292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.863083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0731 11:56:45.911190    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.61925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.158292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.146959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.093458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.2575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (29.802541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (107.09s)
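Note: the long tail of identical "failed to retrieve Pod IPs (may be temporary)" lines above is one retry loop, not a dozen distinct bugs; the widening gaps between the log timestamps suggest backoff between attempts. A hedged Go sketch of that pattern (the function name, intervals, and deadline are illustrative, not minikube's actual helpers):

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollPodIPs retries query until it succeeds or the deadline passes,
// doubling the wait between attempts. Hypothetical stand-in for the
// test's retry helper.
func pollPodIPs(query func() (string, error), deadline time.Duration) (string, error) {
	stop := time.Now().Add(deadline)
	wait := time.Second
	for time.Now().Before(stop) {
		out, err := query()
		if err == nil {
			return out, nil
		}
		fmt.Printf("failed to retrieve Pod IPs (may be temporary): %v\n", err)
		time.Sleep(wait)
		wait *= 2 // widening gaps, as in the timestamps above
	}
	return "", errors.New("failed to resolve pod IPs: deadline exceeded")
}

func main() {
	_, err := pollPodIPs(func() (string, error) {
		return "", errors.New(`no server found for cluster "multinode-481000"`)
	}, 5*time.Second)
	fmt.Println(err)
}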

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-481000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.39925ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (28.751916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-481000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-481000 -v 3 --alsologtostderr: exit status 83 (40.517ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-481000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-481000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:20.081717    3591 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:20.081878    3591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.081882    3591 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:20.081884    3591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.082022    3591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:20.082252    3591 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:20.082448    3591 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:20.087453    3591 out.go:177] * The control-plane node multinode-481000 host is not running: state=Stopped
	I0731 11:57:20.090466    3591 out.go:177]   To start a cluster, run: "minikube start -p multinode-481000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-481000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (28.528916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-481000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-481000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (30.088833ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-481000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-481000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-481000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (29.1625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
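Note: the "unexpected end of JSON input" at multinode_test.go:230 is a follow-on effect rather than a second bug: kubectl exited with an error and printed nothing, and in Go, unmarshalling an empty byte slice always yields exactly that error. A short demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var labels []map[string]string
	// kubectl printed nothing, so the test effectively did this:
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}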

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-481000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-481000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-481000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-481000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (28.606167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
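Note: the assertion decodes the `profile list --output json` dump shown above and expects three entries under Config.Nodes, but the JSON contains exactly one (the lone control-plane node), since the worker nodes were never created. A trimmed, illustrative version of that check in Go (the struct fields match the JSON above, but the code is not minikube's):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just the fields the assertion reads; trimmed
// for illustration.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Reduced from the JSON in the failure message above.
	raw := `{"invalid":[],"valid":[{"Name":"multinode-481000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
	var pl profileList
	if err := json.Unmarshal([]byte(raw), &pl); err != nil {
		panic(err)
	}
	fmt.Printf("expected 3 nodes but have %d nodes\n", len(pl.Valid[0].Config.Nodes)) // 1
}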

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status --output json --alsologtostderr: exit status 7 (28.892209ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-481000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:20.285496    3603 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:20.285661    3603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.285664    3603 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:20.285666    3603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.285780    3603 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:20.285892    3603 out.go:298] Setting JSON to true
	I0731 11:57:20.285901    3603 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:20.285954    3603 notify.go:220] Checking for updates...
	I0731 11:57:20.286086    3603 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:20.286093    3603 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:20.286292    3603 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:20.286296    3603 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:20.286298    3603 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-481000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (28.996417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
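Note: unlike the surrounding failures, this one is a shape mismatch rather than a connectivity problem: with a single stopped node, `status --output json` printed one JSON object (shown in the stdout block above), while the test unmarshals into a slice ([]cmd.Status), and decoding a lone object into a Go slice fails with exactly the error shown. A hedged sketch of a decoder that tolerates both shapes (the Status struct is abbreviated; minikube's real type has more fields):

package main

import (
	"encoding/json"
	"fmt"
)

// Status is an abbreviated stand-in for minikube's cmd.Status.
type Status struct {
	Name string
	Host string
}

// decodeStatuses accepts either one status object or an array of them;
// the test only tries the array form, hence the unmarshal error above.
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-481000","Host":"Stopped"}`) // shape from the log
	got, err := decodeStatuses(raw)
	fmt.Println(got, err) // [{multinode-481000 Stopped}] <nil>
}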

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 node stop m03: exit status 85 (47.581042ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-481000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status: exit status 7 (29.563042ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr: exit status 7 (29.587583ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:20.422001    3611 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:20.422135    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.422139    3611 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:20.422142    3611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.422272    3611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:20.422386    3611 out.go:298] Setting JSON to false
	I0731 11:57:20.422395    3611 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:20.422464    3611 notify.go:220] Checking for updates...
	I0731 11:57:20.422586    3611 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:20.422593    3611 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:20.422790    3611 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:20.422793    3611 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:20.422795    3611 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr": multinode-481000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (28.732916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
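Note: the "incorrect number of running kubelets" assertion scans the plain-text status output for kubelet lines reporting Running; with the host stopped it finds none. An illustrative Go version of that count (the helper name and expectation are assumptions, not the test's literal code):

package main

import (
	"fmt"
	"strings"
)

// countRunningKubelets scans `minikube status` text for kubelet lines
// reporting Running; hypothetical version of the test's check.
func countRunningKubelets(statusOut string) int {
	return strings.Count(statusOut, "kubelet: Running")
}

func main() {
	out := "multinode-481000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	// The test expects the surviving nodes' kubelets to be Running;
	// here the count is 0 because the host never started.
	fmt.Println(countRunningKubelets(out)) // 0
}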

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 node start m03 -v=7 --alsologtostderr: exit status 85 (44.59475ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:20.481191    3615 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:20.481417    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.481421    3615 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:20.481423    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.481549    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:20.481789    3615 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:20.481977    3615 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:20.486447    3615 out.go:177] 
	W0731 11:57:20.489455    3615 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0731 11:57:20.489464    3615 out.go:239] * 
	* 
	W0731 11:57:20.491105    3615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:57:20.492584    3615 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0731 11:57:20.481191    3615 out.go:291] Setting OutFile to fd 1 ...
I0731 11:57:20.481417    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:57:20.481421    3615 out.go:304] Setting ErrFile to fd 2...
I0731 11:57:20.481423    3615 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:57:20.481549    3615 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:57:20.481789    3615 mustload.go:65] Loading cluster: multinode-481000
I0731 11:57:20.481977    3615 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:57:20.486447    3615 out.go:177] 
W0731 11:57:20.489455    3615 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0731 11:57:20.489464    3615 out.go:239] * 
* 
W0731 11:57:20.491105    3615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0731 11:57:20.492584    3615 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-481000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (29.939417ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:20.525668    3617 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:20.525824    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.525827    3617 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:20.525829    3617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:20.525972    3617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:20.526083    3617 out.go:298] Setting JSON to false
	I0731 11:57:20.526092    3617 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:20.526139    3617 notify.go:220] Checking for updates...
	I0731 11:57:20.526282    3617 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:20.526289    3617 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:20.526496    3617 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:20.526500    3617 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:20.526502    3617 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (73.380917ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:21.561667    3619 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:21.561865    3619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:21.561869    3619 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:21.561873    3619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:21.562051    3619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:21.562224    3619 out.go:298] Setting JSON to false
	I0731 11:57:21.562237    3619 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:21.562279    3619 notify.go:220] Checking for updates...
	I0731 11:57:21.562508    3619 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:21.562517    3619 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:21.562819    3619 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:21.562824    3619 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:21.562827    3619 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (72.236584ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:23.783379    3621 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:23.783599    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:23.783604    3621 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:23.783608    3621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:23.783808    3621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:23.783964    3621 out.go:298] Setting JSON to false
	I0731 11:57:23.783975    3621 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:23.784021    3621 notify.go:220] Checking for updates...
	I0731 11:57:23.784262    3621 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:23.784271    3621 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:23.784563    3621 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:23.784568    3621 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:23.784571    3621 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (72.606791ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:26.188932    3623 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:26.189148    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:26.189152    3623 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:26.189155    3623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:26.189317    3623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:26.189479    3623 out.go:298] Setting JSON to false
	I0731 11:57:26.189489    3623 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:26.189522    3623 notify.go:220] Checking for updates...
	I0731 11:57:26.189736    3623 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:26.189744    3623 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:26.190014    3623 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:26.190019    3623 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:26.190021    3623 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (71.476292ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:30.108538    3625 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:30.108743    3625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:30.108747    3625 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:30.108750    3625 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:30.108929    3625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:30.109099    3625 out.go:298] Setting JSON to false
	I0731 11:57:30.109110    3625 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:30.109156    3625 notify.go:220] Checking for updates...
	I0731 11:57:30.109381    3625 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:30.109390    3625 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:30.109660    3625 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:30.109664    3625 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:30.109667    3625 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (72.528417ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:37.650800    3630 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:37.650993    3630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:37.650997    3630 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:37.651000    3630 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:37.651228    3630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:37.651385    3630 out.go:298] Setting JSON to false
	I0731 11:57:37.651396    3630 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:37.651429    3630 notify.go:220] Checking for updates...
	I0731 11:57:37.651635    3630 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:37.651646    3630 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:37.651958    3630 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:37.651962    3630 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:37.651965    3630 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (74.513875ms)

                                                
                                                
-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 11:57:44.545424    3632 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:57:44.545645    3632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:44.545650    3632 out.go:304] Setting ErrFile to fd 2...
	I0731 11:57:44.545654    3632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:57:44.545845    3632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:57:44.545997    3632 out.go:298] Setting JSON to false
	I0731 11:57:44.546011    3632 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:57:44.546052    3632 notify.go:220] Checking for updates...
	I0731 11:57:44.546275    3632 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:57:44.546286    3632 status.go:255] checking status of multinode-481000 ...
	I0731 11:57:44.546566    3632 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:57:44.546571    3632 status.go:343] host is not running, skipping remaining checks
	I0731 11:57:44.546574    3632 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr: exit status 7 (73.997791ms)

-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 11:58:01.326742    3639 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:01.326954    3639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:01.326959    3639 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:01.326963    3639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:01.327151    3639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:01.327341    3639 out.go:298] Setting JSON to false
	I0731 11:58:01.327353    3639 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:58:01.327399    3639 notify.go:220] Checking for updates...
	I0731 11:58:01.327632    3639 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:01.327641    3639 status.go:255] checking status of multinode-481000 ...
	I0731 11:58:01.327933    3639 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:58:01.327938    3639 status.go:343] host is not running, skipping remaining checks
	I0731 11:58:01.327941    3639 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-481000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (32.683167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (40.91s)
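Every failure in this group reduces to the same stderr line: the qemu2 driver cannot reach the socket_vmnet daemon behind /var/run/socket_vmnet. A minimal diagnostic sketch, using only the paths that appear in the executed command lines above; these commands are illustrative and not part of the test suite (the trailing `true` is an arbitrary stand-in for the qemu command the client would normally exec):

	# Does the socket exist, and is a daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Re-run the client by hand; "Connection refused" here confirms the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true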

TestMultiNode/serial/RestartKeepsNodes (8.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-481000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-481000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-481000: (3.186765708s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-481000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-481000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219759125s)

-- stdout --
	* [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	* Restarting existing qemu2 VM for "multinode-481000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-481000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:58:04.640500    3663 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:04.640701    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:04.640705    3663 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:04.640709    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:04.640870    3663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:04.642104    3663 out.go:298] Setting JSON to false
	I0731 11:58:04.661601    3663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3453,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:58:04.661670    3663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:58:04.665761    3663 out.go:177] * [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:58:04.673769    3663 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:58:04.673846    3663 notify.go:220] Checking for updates...
	I0731 11:58:04.679726    3663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:58:04.682740    3663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:58:04.685678    3663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:04.688717    3663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:58:04.691773    3663 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:04.695067    3663 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:04.695118    3663 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:58:04.699691    3663 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 11:58:04.706652    3663 start.go:297] selected driver: qemu2
	I0731 11:58:04.706659    3663 start.go:901] validating driver "qemu2" against &{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:58:04.706732    3663 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:04.709043    3663 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:58:04.709065    3663 cni.go:84] Creating CNI manager for ""
	I0731 11:58:04.709071    3663 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 11:58:04.709117    3663 start.go:340] cluster config:
	{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:58:04.712643    3663 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:04.719739    3663 out.go:177] * Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	I0731 11:58:04.723719    3663 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:58:04.723735    3663 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:58:04.723747    3663 cache.go:56] Caching tarball of preloaded images
	I0731 11:58:04.723818    3663 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:58:04.723824    3663 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:58:04.723878    3663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/multinode-481000/config.json ...
	I0731 11:58:04.724222    3663 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:04.724257    3663 start.go:364] duration metric: took 28.917µs to acquireMachinesLock for "multinode-481000"
	I0731 11:58:04.724266    3663 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:58:04.724272    3663 fix.go:54] fixHost starting: 
	I0731 11:58:04.724394    3663 fix.go:112] recreateIfNeeded on multinode-481000: state=Stopped err=<nil>
	W0731 11:58:04.724404    3663 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:58:04.732694    3663 out.go:177] * Restarting existing qemu2 VM for "multinode-481000" ...
	I0731 11:58:04.736701    3663 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:04.736735    3663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:cb:37:80:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:58:04.738782    3663 main.go:141] libmachine: STDOUT: 
	I0731 11:58:04.738804    3663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:04.738835    3663 fix.go:56] duration metric: took 14.562834ms for fixHost
	I0731 11:58:04.738840    3663 start.go:83] releasing machines lock for "multinode-481000", held for 14.579083ms
	W0731 11:58:04.738847    3663 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:58:04.738883    3663 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:04.738888    3663 start.go:729] Will try again in 5 seconds ...
	I0731 11:58:09.740918    3663 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:09.741276    3663 start.go:364] duration metric: took 271.25µs to acquireMachinesLock for "multinode-481000"
	I0731 11:58:09.741403    3663 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:58:09.741424    3663 fix.go:54] fixHost starting: 
	I0731 11:58:09.742063    3663 fix.go:112] recreateIfNeeded on multinode-481000: state=Stopped err=<nil>
	W0731 11:58:09.742092    3663 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:58:09.750494    3663 out.go:177] * Restarting existing qemu2 VM for "multinode-481000" ...
	I0731 11:58:09.754443    3663 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:09.754729    3663 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:cb:37:80:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:58:09.763399    3663 main.go:141] libmachine: STDOUT: 
	I0731 11:58:09.763450    3663 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:09.763518    3663 fix.go:56] duration metric: took 22.099917ms for fixHost
	I0731 11:58:09.763535    3663 start.go:83] releasing machines lock for "multinode-481000", held for 22.240458ms
	W0731 11:58:09.763693    3663 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:09.770382    3663 out.go:177] 
	W0731 11:58:09.774452    3663 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:58:09.774551    3663 out.go:239] * 
	* 
	W0731 11:58:09.776939    3663 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:58:09.785437    3663 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-481000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-481000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (33.206375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.54s)

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 node delete m03: exit status 83 (39.246166ms)

-- stdout --
	* The control-plane node multinode-481000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-481000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-481000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr: exit status 7 (29.6745ms)

-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 11:58:09.970638    3677 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:09.970790    3677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:09.970793    3677 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:09.970795    3677 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:09.970937    3677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:09.971052    3677 out.go:298] Setting JSON to false
	I0731 11:58:09.971061    3677 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:58:09.971122    3677 notify.go:220] Checking for updates...
	I0731 11:58:09.971244    3677 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:09.971251    3677 status.go:255] checking status of multinode-481000 ...
	I0731 11:58:09.971465    3677 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:58:09.971469    3677 status.go:343] host is not running, skipping remaining checks
	I0731 11:58:09.971471    3677 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (29.223792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
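The post-mortem helper reads a single field of the status struct logged above (&{Name:... Host:Stopped Kubelet:Stopped ...}) through a Go template. Assuming the other struct fields shown in the log are exposed to --format the same way {{.Host}} is, equivalent one-field probes would look like:

	out/minikube-darwin-arm64 status -p multinode-481000 --format='{{.Host}}'      # Stopped
	out/minikube-darwin-arm64 status -p multinode-481000 --format='{{.Kubelet}}'   # Stopped
	out/minikube-darwin-arm64 status -p multinode-481000 --format='{{.APIServer}}' # Stopped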

TestMultiNode/serial/StopMultiNode (2.75s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-481000 stop: (2.623501625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status: exit status 7 (63.12025ms)

-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr: exit status 7 (32.419583ms)

-- stdout --
	multinode-481000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0731 11:58:12.719427    3701 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:12.719578    3701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:12.719581    3701 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:12.719584    3701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:12.719731    3701 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:12.719847    3701 out.go:298] Setting JSON to false
	I0731 11:58:12.719858    3701 mustload.go:65] Loading cluster: multinode-481000
	I0731 11:58:12.719914    3701 notify.go:220] Checking for updates...
	I0731 11:58:12.720031    3701 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:12.720038    3701 status.go:255] checking status of multinode-481000 ...
	I0731 11:58:12.720248    3701 status.go:330] multinode-481000 host status = "Stopped" (err=<nil>)
	I0731 11:58:12.720252    3701 status.go:343] host is not running, skipping remaining checks
	I0731 11:58:12.720255    3701 status.go:257] multinode-481000 status: &{Name:multinode-481000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr": multinode-481000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-481000 status --alsologtostderr": multinode-481000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (29.923708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.75s)
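The "incorrect number of stopped hosts/kubelets" assertions count "host: Stopped" and "kubelet: Stopped" entries in the status output, presumably expecting one per node of the intended two-node profile; only the control-plane entry exists in this run because the worker node was never created. A rough shell equivalent of that count:

	# A healthy two-node stop should yield 2; this run yields 1.
	out/minikube-darwin-arm64 -p multinode-481000 status | grep -c 'host: Stopped'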

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-481000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-481000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.18934225s)

-- stdout --
	* [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	* Restarting existing qemu2 VM for "multinode-481000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-481000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:58:12.778606    3705 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:12.778735    3705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:12.778739    3705 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:12.778741    3705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:12.778870    3705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:12.779904    3705 out.go:298] Setting JSON to false
	I0731 11:58:12.795829    3705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3461,"bootTime":1722448831,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:58:12.795907    3705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:58:12.800571    3705 out.go:177] * [multinode-481000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:58:12.807515    3705 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:58:12.807537    3705 notify.go:220] Checking for updates...
	I0731 11:58:12.814452    3705 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:58:12.817470    3705 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:58:12.821484    3705 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:12.824507    3705 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:58:12.827407    3705 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:12.830732    3705 config.go:182] Loaded profile config "multinode-481000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:12.830982    3705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:58:12.835485    3705 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 11:58:12.842433    3705 start.go:297] selected driver: qemu2
	I0731 11:58:12.842439    3705 start.go:901] validating driver "qemu2" against &{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:58:12.842492    3705 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:12.844870    3705 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:58:12.844911    3705 cni.go:84] Creating CNI manager for ""
	I0731 11:58:12.844915    3705 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 11:58:12.844959    3705 start.go:340] cluster config:
	{Name:multinode-481000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-481000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:58:12.848624    3705 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:12.857441    3705 out.go:177] * Starting "multinode-481000" primary control-plane node in "multinode-481000" cluster
	I0731 11:58:12.861460    3705 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:58:12.861477    3705 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:58:12.861491    3705 cache.go:56] Caching tarball of preloaded images
	I0731 11:58:12.861544    3705 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 11:58:12.861553    3705 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 11:58:12.861617    3705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/multinode-481000/config.json ...
	I0731 11:58:12.862061    3705 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:12.862090    3705 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "multinode-481000"
	I0731 11:58:12.862098    3705 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:58:12.862104    3705 fix.go:54] fixHost starting: 
	I0731 11:58:12.862227    3705 fix.go:112] recreateIfNeeded on multinode-481000: state=Stopped err=<nil>
	W0731 11:58:12.862238    3705 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:58:12.870499    3705 out.go:177] * Restarting existing qemu2 VM for "multinode-481000" ...
	I0731 11:58:12.874418    3705 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:12.874457    3705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:cb:37:80:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:58:12.876484    3705 main.go:141] libmachine: STDOUT: 
	I0731 11:58:12.876505    3705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:12.876533    3705 fix.go:56] duration metric: took 14.430125ms for fixHost
	I0731 11:58:12.876537    3705 start.go:83] releasing machines lock for "multinode-481000", held for 14.443166ms
	W0731 11:58:12.876545    3705 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:58:12.876580    3705 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:12.876585    3705 start.go:729] Will try again in 5 seconds ...
	I0731 11:58:17.878024    3705 start.go:360] acquireMachinesLock for multinode-481000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:17.878370    3705 start.go:364] duration metric: took 274.708µs to acquireMachinesLock for "multinode-481000"
	I0731 11:58:17.878471    3705 start.go:96] Skipping create...Using existing machine configuration
	I0731 11:58:17.878486    3705 fix.go:54] fixHost starting: 
	I0731 11:58:17.879215    3705 fix.go:112] recreateIfNeeded on multinode-481000: state=Stopped err=<nil>
	W0731 11:58:17.879238    3705 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 11:58:17.887537    3705 out.go:177] * Restarting existing qemu2 VM for "multinode-481000" ...
	I0731 11:58:17.891513    3705 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:17.891722    3705 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:18:cb:37:80:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/multinode-481000/disk.qcow2
	I0731 11:58:17.900564    3705 main.go:141] libmachine: STDOUT: 
	I0731 11:58:17.900615    3705 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:17.900678    3705 fix.go:56] duration metric: took 22.188ms for fixHost
	I0731 11:58:17.900701    3705 start.go:83] releasing machines lock for "multinode-481000", held for 22.312792ms
	W0731 11:58:17.900895    3705 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-481000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:17.908534    3705 out.go:177] 
	W0731 11:58:17.912622    3705 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:58:17.912724    3705 out.go:239] * 
	* 
	W0731 11:58:17.915121    3705 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:58:17.927565    3705 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-481000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (66.51175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
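Each restart attempt above follows the same arc: fixHost fails, minikube waits five seconds, retries once, then exits with GUEST_PROVISION and recommends deleting the profile. The recovery path suggested by the error output itself, sketched for this profile:

	# Destroys the profile's VM and on-disk state, as the error message advises.
	out/minikube-darwin-arm64 delete -p multinode-481000
	# Once /var/run/socket_vmnet accepts connections again, recreate the profile.
	out/minikube-darwin-arm64 start -p multinode-481000 --driver=qemu2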

TestMultiNode/serial/ValidateNameConflict (20.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-481000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-481000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-481000-m01 --driver=qemu2 : exit status 80 (9.814978583s)

-- stdout --
	* [multinode-481000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-481000-m01" primary control-plane node in "multinode-481000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-481000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-481000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-481000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-481000-m02 --driver=qemu2 : exit status 80 (10.018849667s)

-- stdout --
	* [multinode-481000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-481000-m02" primary control-plane node in "multinode-481000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-481000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-481000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-481000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-481000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-481000: exit status 83 (80.26425ms)

-- stdout --
	* The control-plane node multinode-481000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-481000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-481000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-481000 -n multinode-481000: exit status 7 (30.009834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-481000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.06s)
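The conflict under test: minikube derives additional node names from the profile name (<profile>-m02, -m03, ...), so a standalone profile named multinode-481000-m02 would collide with the first node that "node add -p multinode-481000" tries to create. A sketch of the colliding pair, assuming that naming scheme:

	out/minikube-darwin-arm64 start -p multinode-481000-m02 --driver=qemu2   # standalone profile
	out/minikube-darwin-arm64 node add -p multinode-481000                   # would also claim "multinode-481000-m02"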

TestPreload (10.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-783000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0731 11:58:47.007318    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-783000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.909179291s)

-- stdout --
	* [test-preload-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-783000" primary control-plane node in "test-preload-783000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-783000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 11:58:38.197442    3757 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:58:38.197580    3757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:38.197584    3757 out.go:304] Setting ErrFile to fd 2...
	I0731 11:58:38.197586    3757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:58:38.197704    3757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:58:38.198646    3757 out.go:298] Setting JSON to false
	I0731 11:58:38.214474    3757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3487,"bootTime":1722448831,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:58:38.214542    3757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:58:38.220980    3757 out.go:177] * [test-preload-783000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:58:38.228882    3757 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:58:38.228940    3757 notify.go:220] Checking for updates...
	I0731 11:58:38.235851    3757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:58:38.238870    3757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:58:38.242910    3757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:58:38.245835    3757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:58:38.248915    3757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:58:38.252141    3757 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:58:38.252189    3757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:58:38.256852    3757 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 11:58:38.263933    3757 start.go:297] selected driver: qemu2
	I0731 11:58:38.263940    3757 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:58:38.263947    3757 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:58:38.266331    3757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:58:38.269807    3757 out.go:177] * Automatically selected the socket_vmnet network
	I0731 11:58:38.272859    3757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 11:58:38.272886    3757 cni.go:84] Creating CNI manager for ""
	I0731 11:58:38.272893    3757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:58:38.272897    3757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:58:38.272927    3757 start.go:340] cluster config:
	{Name:test-preload-783000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:58:38.276675    3757 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.283903    3757 out.go:177] * Starting "test-preload-783000" primary control-plane node in "test-preload-783000" cluster
	I0731 11:58:38.287877    3757 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0731 11:58:38.287983    3757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/test-preload-783000/config.json ...
	I0731 11:58:38.288008    3757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/test-preload-783000/config.json: {Name:mkc59949b3ad66f321d9f8b5ddf963c23ef1a2a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:58:38.288015    3757 cache.go:107] acquiring lock: {Name:mk1d12ca53e45b3e8b9e16d35f7498ea0f4170fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288018    3757 cache.go:107] acquiring lock: {Name:mk9e125329ccbda5888ca49c561bfe2f609a525b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288039    3757 cache.go:107] acquiring lock: {Name:mk3b60a49ddebb593be101aaf0564943c14c64d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288113    3757 cache.go:107] acquiring lock: {Name:mkc08b7071292a5d046478e4beca97487b884686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288184    3757 cache.go:107] acquiring lock: {Name:mk7e16d7cff943fdc7fa8651035d0b8eef51fba1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288227    3757 cache.go:107] acquiring lock: {Name:mk2685389d1d482902caaf5f263a29676d00f913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288262    3757 cache.go:107] acquiring lock: {Name:mke4839882e363308834d4b45af532de57c1100f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288361    3757 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 11:58:38.288372    3757 cache.go:107] acquiring lock: {Name:mkd696de32fe4e13267913f26c9b5d6c4e0637a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:58:38.288455    3757 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:58:38.288454    3757 start.go:360] acquireMachinesLock for test-preload-783000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:38.288489    3757 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 11:58:38.288498    3757 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 11:58:38.288541    3757 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 11:58:38.288544    3757 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 11:58:38.288613    3757 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 11:58:38.288363    3757 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 11:58:38.288598    3757 start.go:364] duration metric: took 121.541µs to acquireMachinesLock for "test-preload-783000"
	I0731 11:58:38.288708    3757 start.go:93] Provisioning new machine with config: &{Name:test-preload-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:58:38.288756    3757 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:58:38.291953    3757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 11:58:38.295067    3757 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 11:58:38.295132    3757 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 11:58:38.295219    3757 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 11:58:38.296549    3757 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 11:58:38.296524    3757 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 11:58:38.296684    3757 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 11:58:38.296629    3757 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 11:58:38.296712    3757 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 11:58:38.310250    3757 start.go:159] libmachine.API.Create for "test-preload-783000" (driver="qemu2")
	I0731 11:58:38.310275    3757 client.go:168] LocalClient.Create starting
	I0731 11:58:38.310369    3757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:58:38.310400    3757 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:38.310411    3757 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:38.310465    3757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:58:38.310488    3757 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:38.310495    3757 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:38.310847    3757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:58:38.456571    3757 main.go:141] libmachine: Creating SSH key...
	I0731 11:58:38.563435    3757 main.go:141] libmachine: Creating Disk image...
	I0731 11:58:38.563463    3757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:58:38.563805    3757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:38.573864    3757 main.go:141] libmachine: STDOUT: 
	I0731 11:58:38.573889    3757 main.go:141] libmachine: STDERR: 
	I0731 11:58:38.573938    3757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2 +20000M
	I0731 11:58:38.583315    3757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:58:38.583333    3757 main.go:141] libmachine: STDERR: 
	I0731 11:58:38.583346    3757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:38.583350    3757 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:58:38.583361    3757 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:38.583387    3757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c3:67:e0:d5:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:38.585304    3757 main.go:141] libmachine: STDOUT: 
	I0731 11:58:38.585324    3757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:38.585341    3757 client.go:171] duration metric: took 275.065917ms to LocalClient.Create
	I0731 11:58:38.811998    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W0731 11:58:38.822001    3757 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 11:58:38.822020    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 11:58:38.826298    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 11:58:38.844595    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 11:58:38.884549    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 11:58:38.937754    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 11:58:38.949940    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 11:58:38.953482    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0731 11:58:38.953505    3757 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 665.431125ms
	I0731 11:58:38.953526    3757 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0731 11:58:39.362606    3757 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 11:58:39.362724    3757 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 11:58:39.644165    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 11:58:39.644217    3757 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.356233375s
	I0731 11:58:39.644242    3757 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 11:58:40.470839    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0731 11:58:40.470888    3757 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.182719083s
	I0731 11:58:40.470946    3757 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0731 11:58:40.585501    3757 start.go:128] duration metric: took 2.296770459s to createHost
	I0731 11:58:40.585559    3757 start.go:83] releasing machines lock for "test-preload-783000", held for 2.296905041s
	W0731 11:58:40.585618    3757 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:40.602470    3757 out.go:177] * Deleting "test-preload-783000" in qemu2 ...
	W0731 11:58:40.632277    3757 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:40.632315    3757 start.go:729] Will try again in 5 seconds ...
	I0731 11:58:41.177648    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0731 11:58:41.177696    3757 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.88959625s
	I0731 11:58:41.177719    3757 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0731 11:58:41.399510    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0731 11:58:41.399555    3757 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.111610792s
	I0731 11:58:41.399602    3757 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0731 11:58:43.053409    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0731 11:58:43.053466    3757 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.765192458s
	I0731 11:58:43.053497    3757 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0731 11:58:43.431203    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0731 11:58:43.431254    3757 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.143363375s
	I0731 11:58:43.431281    3757 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0731 11:58:45.632531    3757 start.go:360] acquireMachinesLock for test-preload-783000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 11:58:45.632973    3757 start.go:364] duration metric: took 355.833µs to acquireMachinesLock for "test-preload-783000"
	I0731 11:58:45.633095    3757 start.go:93] Provisioning new machine with config: &{Name:test-preload-783000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-783000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 11:58:45.633349    3757 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 11:58:45.637916    3757 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 11:58:45.687342    3757 start.go:159] libmachine.API.Create for "test-preload-783000" (driver="qemu2")
	I0731 11:58:45.687523    3757 client.go:168] LocalClient.Create starting
	I0731 11:58:45.687650    3757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 11:58:45.687710    3757 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:45.687737    3757 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:45.687802    3757 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 11:58:45.687852    3757 main.go:141] libmachine: Decoding PEM data...
	I0731 11:58:45.687877    3757 main.go:141] libmachine: Parsing certificate...
	I0731 11:58:45.688379    3757 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 11:58:45.845910    3757 main.go:141] libmachine: Creating SSH key...
	I0731 11:58:46.011348    3757 main.go:141] libmachine: Creating Disk image...
	I0731 11:58:46.011355    3757 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 11:58:46.011612    3757 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:46.021227    3757 main.go:141] libmachine: STDOUT: 
	I0731 11:58:46.021319    3757 main.go:141] libmachine: STDERR: 
	I0731 11:58:46.021372    3757 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2 +20000M
	I0731 11:58:46.029456    3757 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 11:58:46.029472    3757 main.go:141] libmachine: STDERR: 
	I0731 11:58:46.029483    3757 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:46.029488    3757 main.go:141] libmachine: Starting QEMU VM...
	I0731 11:58:46.029506    3757 qemu.go:418] Using hvf for hardware acceleration
	I0731 11:58:46.029547    3757 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:27:75:52:e9:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/test-preload-783000/disk.qcow2
	I0731 11:58:46.031274    3757 main.go:141] libmachine: STDOUT: 
	I0731 11:58:46.031323    3757 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 11:58:46.031337    3757 client.go:171] duration metric: took 343.815667ms to LocalClient.Create
	I0731 11:58:47.431193    3757 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0731 11:58:47.431270    3757 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.143306833s
	I0731 11:58:47.431294    3757 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0731 11:58:47.431341    3757 cache.go:87] Successfully saved all images to host disk.
	I0731 11:58:48.033489    3757 start.go:128] duration metric: took 2.40015625s to createHost
	I0731 11:58:48.033529    3757 start.go:83] releasing machines lock for "test-preload-783000", held for 2.400577625s
	W0731 11:58:48.033831    3757 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-783000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 11:58:48.043340    3757 out.go:177] 
	W0731 11:58:48.051378    3757 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 11:58:48.051406    3757 out.go:239] * 
	* 
	W0731 11:58:48.054272    3757 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:58:48.064330    3757 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-783000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-31 11:58:48.081061 -0700 PDT m=+2687.911537085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-783000 -n test-preload-783000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-783000 -n test-preload-783000: exit status 7 (67.281125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-783000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-783000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-783000
--- FAIL: TestPreload (10.06s)
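Because the test passes --preload=false, minikube skipped the preloaded tarball and cached the eight component images individually; the trace above shows that path completing ("Successfully saved all images to host disk") even though no VM ever booted. The failure is the same socket_vmnet refusal, not a preload regression. A quick check that the per-image cache really landed, using the cache directory printed in the log:

	ls /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io
	# expect per-image tarballs such as pause_3.7, etcd_3.5.3-0 and kube-apiserver_v1.24.4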

TestScheduledStopUnix (10.1s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-505000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-505000 --memory=2048 --driver=qemu2 : exit status 80 (9.945828041s)

-- stdout --
	* [scheduled-stop-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-505000" primary control-plane node in "scheduled-stop-505000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-505000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-505000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-505000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-505000" primary control-plane node in "scheduled-stop-505000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-505000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-505000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-31 11:58:58.173921 -0700 PDT m=+2698.004616751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-505000 -n scheduled-stop-505000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-505000 -n scheduled-stop-505000: exit status 7 (70.386708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-505000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-505000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-505000
--- FAIL: TestScheduledStopUnix (10.10s)
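The scheduled-stop logic is never reached; the test dies at the identical provisioning step. The failing exec (spelled out in the TestPreload trace above) wraps qemu-system-aarch64 in socket_vmnet_client, so the refusal can likely be reproduced in isolation by handing socket_vmnet_client a trivial child command instead of QEMU (a sketch; using true as the wrapped command is an assumption):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# while the daemon is down, this should print: Failed to connect to "/var/run/socket_vmnet": Connection refused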

TestSkaffold (12.35s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2489050655 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2489050655 version: (1.065840292s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-137000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-137000 --memory=2600 --driver=qemu2 : exit status 80 (9.732383917s)

-- stdout --
	* [skaffold-137000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-137000" primary control-plane node in "skaffold-137000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-137000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-137000" primary control-plane node in "skaffold-137000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-137000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-137000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-31 11:59:10.532841 -0700 PDT m=+2710.363804418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-137000 -n skaffold-137000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-137000 -n skaffold-137000: exit status 7 (61.273542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-137000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-137000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-137000
--- FAIL: TestSkaffold (12.35s)

TestRunningBinaryUpgrade (602s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2038113708 start -p running-upgrade-334000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2038113708 start -p running-upgrade-334000 --memory=2200 --vm-driver=qemu2 : (53.459226042s)
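Note the asymmetry with the rest of this section: the legacy v1.26.0 binary boots its VM without trouble (53s). Its profile, echoed in the config dump below, has Network, SocketVMnetClientPath and SocketVMnetPath all empty, i.e. it appears to predate the qemu2 driver's switch to socket_vmnet and uses QEMU's built-in user networking, never dialing the broken socket. That makes this the only test here whose cluster actually comes up; the upgraded binary then runs against the existing VM for 8m34s before exiting 80. A hedged way to confirm the networking mode from the saved profile (the JSON field names are assumed to match the Go config dump below):

	grep -E '"(Network|SocketVMnetPath)"' /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/config.json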
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-334000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0731 12:01:45.904821    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 12:01:50.074459    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-334000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m34.07815525s)

-- stdout --
	* [running-upgrade-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-334000" primary control-plane node in "running-upgrade-334000" cluster
	* Updating the running qemu2 "running-upgrade-334000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0731 12:00:51.360409    4422 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:00:51.360785    4422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:00:51.360791    4422 out.go:304] Setting ErrFile to fd 2...
	I0731 12:00:51.360794    4422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:00:51.360990    4422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:00:51.362328    4422 out.go:298] Setting JSON to false
	I0731 12:00:51.379312    4422 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3620,"bootTime":1722448831,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:00:51.379390    4422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:00:51.384504    4422 out.go:177] * [running-upgrade-334000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:00:51.390441    4422 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:00:51.390520    4422 notify.go:220] Checking for updates...
	I0731 12:00:51.398414    4422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:00:51.402410    4422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:00:51.405444    4422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:00:51.410363    4422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:00:51.417434    4422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:00:51.420835    4422 config.go:182] Loaded profile config "running-upgrade-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:00:51.425394    4422 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:00:51.428451    4422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:00:51.432436    4422 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:00:51.438468    4422 start.go:297] selected driver: qemu2
	I0731 12:00:51.438480    4422 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:00:51.438547    4422 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:00:51.441034    4422 cni.go:84] Creating CNI manager for ""
	I0731 12:00:51.441050    4422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:00:51.441076    4422 start.go:340] cluster config:
	{Name:running-upgrade-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:00:51.441125    4422 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:00:51.449457    4422 out.go:177] * Starting "running-upgrade-334000" primary control-plane node in "running-upgrade-334000" cluster
	I0731 12:00:51.453475    4422 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:00:51.453488    4422 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:00:51.453496    4422 cache.go:56] Caching tarball of preloaded images
	I0731 12:00:51.453556    4422 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:00:51.453562    4422 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:00:51.453615    4422 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/config.json ...
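
For reference, the profile config saved above is plain JSON under profiles/running-upgrade-334000/config.json, carrying the same fields as the cluster-config dumps in this log. A minimal sketch of reading it back in Go; the struct below is a hand-picked subset of minikube's schema, not the real type, and the path is the one from this run:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Subset of the profile config fields visible in this log; the real
    // minikube ClusterConfig type has many more fields.
    type clusterConfig struct {
    	Name             string
    	Driver           string
    	KubernetesConfig struct {
    		KubernetesVersion string
    		ClusterName       string
    		ContainerRuntime  string
    	}
    }

    func main() {
    	data, err := os.ReadFile("/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/config.json")
    	if err != nil {
    		panic(err)
    	}
    	var cfg clusterConfig
    	if err := json.Unmarshal(data, &cfg); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s: driver=%s k8s=%s\n", cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
    }
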
	I0731 12:00:51.454059    4422 start.go:360] acquireMachinesLock for running-upgrade-334000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:00:51.454096    4422 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "running-upgrade-334000"
	I0731 12:00:51.454104    4422 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:00:51.454109    4422 fix.go:54] fixHost starting: 
	I0731 12:00:51.454716    4422 fix.go:112] recreateIfNeeded on running-upgrade-334000: state=Running err=<nil>
	W0731 12:00:51.454725    4422 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:00:51.463445    4422 out.go:177] * Updating the running qemu2 "running-upgrade-334000" VM ...
	I0731 12:00:51.467298    4422 machine.go:94] provisionDockerMachine start ...
	I0731 12:00:51.467329    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.467441    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.467446    4422 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:00:51.526637    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-334000
	
	I0731 12:00:51.526654    4422 buildroot.go:166] provisioning hostname "running-upgrade-334000"
	I0731 12:00:51.526703    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.526830    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.526836    4422 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-334000 && echo "running-upgrade-334000" | sudo tee /etc/hostname
	I0731 12:00:51.591140    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-334000
	
	I0731 12:00:51.591188    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.591309    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.591317    4422 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-334000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-334000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-334000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:00:51.651625    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
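
Each provisioning step in this phase is a one-shot SSH command against the forwarded guest port (localhost:50249, user docker, key path as logged above). A minimal sketch of that pattern using golang.org/x/crypto/ssh; this is an illustration of the mechanism, not minikube's actual ssh_runner:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "localhost:50249", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// One session per command, exactly like the log's "About to run SSH command" steps.
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	out, err := sess.CombinedOutput("hostname")
    	fmt.Printf("out=%q err=%v\n", out, err)
    }
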
	I0731 12:00:51.651636    4422 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19356-1202/.minikube CaCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19356-1202/.minikube}
	I0731 12:00:51.651644    4422 buildroot.go:174] setting up certificates
	I0731 12:00:51.651648    4422 provision.go:84] configureAuth start
	I0731 12:00:51.651655    4422 provision.go:143] copyHostCerts
	I0731 12:00:51.651730    4422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem, removing ...
	I0731 12:00:51.651736    4422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem
	I0731 12:00:51.651859    4422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem (1082 bytes)
	I0731 12:00:51.652042    4422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem, removing ...
	I0731 12:00:51.652046    4422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem
	I0731 12:00:51.652105    4422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem (1123 bytes)
	I0731 12:00:51.652210    4422 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem, removing ...
	I0731 12:00:51.652214    4422 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem
	I0731 12:00:51.652273    4422 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem (1679 bytes)
	I0731 12:00:51.652361    4422 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-334000 san=[127.0.0.1 localhost minikube running-upgrade-334000]
	I0731 12:00:51.711397    4422 provision.go:177] copyRemoteCerts
	I0731 12:00:51.711436    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:00:51.711446    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:00:51.743451    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:00:51.750046    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:00:51.757339    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 12:00:51.764323    4422 provision.go:87] duration metric: took 112.673167ms to configureAuth
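
configureAuth (provision.go above) signs a server certificate against the CA with the SAN list printed in the log. A hand-rolled sketch of that signing step with crypto/x509; the self-generated CA and 2048-bit keys here are illustrative assumptions (a real run loads ca.pem/ca-key.pem from disk), with the expiry taken from the CertExpiration value in the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA; minikube loads this from .minikube/certs/ca.pem and ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}

    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-334000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		// SANs exactly as logged: san=[127.0.0.1 localhost minikube running-upgrade-334000]
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "running-upgrade-334000"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caTmpl, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
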
	I0731 12:00:51.764332    4422 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:00:51.764428    4422 config.go:182] Loaded profile config "running-upgrade-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:00:51.764467    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.764559    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.764564    4422 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:00:51.826792    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:00:51.826803    4422 buildroot.go:70] root file system type: tmpfs
	I0731 12:00:51.826856    4422 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:00:51.826902    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.827020    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.827052    4422 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:00:51.891850    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:00:51.891908    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:51.892022    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:51.892030    4422 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:00:51.953041    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: 
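
The "diff -u ... || { mv ...; systemctl restart docker; }" command above is an idempotence guard: the unit file is only swapped in, and docker only restarted, when the rendered content actually differs from what is installed. The same guard expressed locally in Go, as a sketch (writeIfChanged is an illustrative helper, not a minikube function):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged replaces path with data only when the contents differ,
    // returning whether a (potentially disruptive) service restart is needed.
    func writeIfChanged(path string, data []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, data) {
    		return false, nil // unchanged: skip daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path+".new", data, 0o644); err != nil {
    		return false, err
    	}
    	// Mirrors the log's mv of docker.service.new over docker.service.
    	return true, os.Rename(path+".new", path)
    }

    func main() {
    	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err)
    }
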
	I0731 12:00:51.953052    4422 machine.go:97] duration metric: took 485.758792ms to provisionDockerMachine
	I0731 12:00:51.953058    4422 start.go:293] postStartSetup for "running-upgrade-334000" (driver="qemu2")
	I0731 12:00:51.953065    4422 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:00:51.953114    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:00:51.953122    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:00:51.985719    4422 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:00:51.986959    4422 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:00:51.986967    4422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/addons for local assets ...
	I0731 12:00:51.987055    4422 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/files for local assets ...
	I0731 12:00:51.987169    4422 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem -> 17012.pem in /etc/ssl/certs
	I0731 12:00:51.987296    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:00:51.989898    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:00:52.001851    4422 start.go:296] duration metric: took 48.786125ms for postStartSetup
	I0731 12:00:52.001873    4422 fix.go:56] duration metric: took 547.776375ms for fixHost
	I0731 12:00:52.001927    4422 main.go:141] libmachine: Using SSH client type: native
	I0731 12:00:52.002047    4422 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10331aa10] 0x10331d270 <nil>  [] 0s} localhost 50249 <nil> <nil>}
	I0731 12:00:52.002054    4422 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:00:52.061919    4422 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722452451.660647513
	
	I0731 12:00:52.061928    4422 fix.go:216] guest clock: 1722452451.660647513
	I0731 12:00:52.061933    4422 fix.go:229] Guest: 2024-07-31 12:00:51.660647513 -0700 PDT Remote: 2024-07-31 12:00:52.001874 -0700 PDT m=+0.660607084 (delta=-341.226487ms)
	I0731 12:00:52.061943    4422 fix.go:200] guest clock delta is within tolerance: -341.226487ms
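
fix.go compares the guest's "date +%s.%N" output against the host clock and only resynchronizes when the delta leaves tolerance; here the -341ms delta passes. A sketch of that comparison using the raw value captured above; the one-second tolerance below is an assumption, since the log does not print the actual threshold:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Raw output of `date +%s.%N` on the guest, as captured in the log.
    	raw := "1722452451.660647513"
    	parts := strings.SplitN(raw, ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	host := time.Now() // the "Remote" timestamp in the log line
    	delta := guest.Sub(host)

    	const tolerance = time.Second // assumed; minikube's real threshold may differ
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, resync needed\n", delta)
    	}
    }
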
	I0731 12:00:52.061946    4422 start.go:83] releasing machines lock for "running-upgrade-334000", held for 607.859209ms
	I0731 12:00:52.062004    4422 ssh_runner.go:195] Run: cat /version.json
	I0731 12:00:52.062015    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:00:52.062004    4422 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:00:52.062061    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	W0731 12:00:52.062578    4422 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50249: connect: connection refused
	I0731 12:00:52.062595    4422 retry.go:31] will retry after 336.12196ms: dial tcp [::1]:50249: connect: connection refused
	W0731 12:00:52.435808    4422 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:00:52.435887    4422 ssh_runner.go:195] Run: systemctl --version
	I0731 12:00:52.437948    4422 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:00:52.439870    4422 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:00:52.439901    4422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:00:52.443265    4422 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:00:52.448263    4422 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:00:52.448274    4422 start.go:495] detecting cgroup driver to use...
	I0731 12:00:52.448345    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:00:52.453755    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:00:52.456797    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:00:52.459599    4422 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:00:52.459621    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:00:52.462486    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:00:52.466043    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:00:52.469660    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:00:52.473282    4422 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:00:52.476212    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:00:52.478985    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:00:52.482308    4422 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:00:52.485553    4422 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:00:52.488128    4422 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:00:52.490838    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:00:52.577853    4422 ssh_runner.go:195] Run: sudo systemctl restart containerd
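
The run of sed commands above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the pause:3.7 sandbox image, and the standard /etc/cni/net.d conf_dir. The same class of regex edit in Go, as a sketch over a made-up config fragment:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.6"
    `
    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
    		ReplaceAllString(conf, "${1}SystemdCgroup = false")
    	// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|'
    	conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
    		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`)
    	fmt.Print(conf)
    }
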
	I0731 12:00:52.589058    4422 start.go:495] detecting cgroup driver to use...
	I0731 12:00:52.589123    4422 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:00:52.594412    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:00:52.599343    4422 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:00:52.609145    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:00:52.613663    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:00:52.618113    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:00:52.624003    4422 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:00:52.625387    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:00:52.628247    4422 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:00:52.632869    4422 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:00:52.722886    4422 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:00:52.804340    4422 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:00:52.804396    4422 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:00:52.809843    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:00:52.909356    4422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:01:06.583975    4422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.6748995s)
	I0731 12:01:06.584044    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:01:06.588939    4422 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0731 12:01:06.595701    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:01:06.602214    4422 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:01:06.686770    4422 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:01:06.775794    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:01:06.854617    4422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:01:06.861479    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:01:06.866342    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:01:06.960166    4422 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:01:06.999092    4422 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:01:06.999162    4422 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:01:07.001115    4422 start.go:563] Will wait 60s for crictl version
	I0731 12:01:07.001152    4422 ssh_runner.go:195] Run: which crictl
	I0731 12:01:07.002622    4422 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:01:07.014572    4422 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:01:07.014641    4422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:01:07.027743    4422 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:01:07.047346    4422 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:01:07.047468    4422 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:01:07.048857    4422 kubeadm.go:883] updating cluster {Name:running-upgrade-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:01:07.048900    4422 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:01:07.048938    4422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:01:07.059433    4422 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:01:07.059443    4422 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:01:07.059483    4422 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:01:07.062607    4422 ssh_runner.go:195] Run: which lz4
	I0731 12:01:07.063866    4422 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:01:07.065047    4422 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:01:07.065055    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:01:07.927751    4422 docker.go:649] duration metric: took 863.931625ms to copy over tarball
	I0731 12:01:07.927805    4422 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:01:09.445562    4422 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.517775459s)
	I0731 12:01:09.445577    4422 ssh_runner.go:146] rm: /preloaded.tar.lz4
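
The preload path above is probe-then-transfer: a remote stat decides whether the ~360MB tarball must be copied before it is unpacked into /var with tar -I lz4. A sketch of that decision using the stock ssh/scp CLIs with the host/port/key from this run; it glosses over the sudo and permission handling the real ssh_runner does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const (
    	target = "docker@localhost"
    	port   = "50249"
    	key    = "/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa"
    )

    func run(args ...string) error {
    	return exec.Command(args[0], args[1:]...).Run()
    }

    func main() {
    	// Probe first: `stat` exits non-zero when the tarball is absent on the guest.
    	if err := run("ssh", "-p", port, "-i", key, target, `stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
    		// Not there yet: copy the cached tarball over before unpacking.
    		if err := run("scp", "-P", port, "-i", key,
    			"/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4",
    			target+":/preloaded.tar.lz4"); err != nil {
    			panic(err)
    		}
    	}
    	// Unpack into /var and clean up, as in the log's tar and rm steps.
    	if err := run("ssh", "-p", port, "-i", key, target,
    		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4"); err != nil {
    		panic(err)
    	}
    	fmt.Println("preload extracted")
    }
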
	I0731 12:01:09.461727    4422 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:01:09.465249    4422 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:01:09.470057    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:01:09.551181    4422 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:01:10.747442    4422 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.19627025s)
	I0731 12:01:10.747530    4422 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:01:10.758712    4422 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:01:10.758724    4422 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:01:10.758729    4422 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:01:10.762635    4422 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:01:10.764249    4422 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:01:10.766314    4422 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:01:10.766390    4422 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:01:10.767797    4422 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:01:10.767918    4422 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:01:10.769331    4422 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:01:10.769337    4422 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:01:10.770589    4422 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:01:10.770666    4422 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:01:10.771787    4422 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:01:10.772179    4422 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:01:10.772811    4422 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:01:10.773012    4422 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:01:10.773999    4422 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:01:10.774695    4422 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:01:11.153113    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:01:11.153925    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:01:11.171107    4422 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:01:11.171134    4422 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:01:11.171172    4422 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:01:11.171186    4422 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:01:11.171191    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:01:11.171215    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	W0731 12:01:11.176614    4422 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:01:11.176736    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:01:11.185774    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:01:11.185784    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:01:11.185882    4422 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:01:11.192148    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:01:11.196616    4422 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:01:11.196629    4422 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:01:11.196634    4422 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:01:11.196651    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:01:11.196676    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:01:11.198373    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:01:11.210206    4422 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:01:11.210226    4422 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:01:11.210274    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:01:11.211028    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:01:11.211126    4422 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:01:11.225625    4422 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:01:11.225643    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0731 12:01:11.227754    4422 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:01:11.227775    4422 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:01:11.227825    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:01:11.229835    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:01:11.234178    4422 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:01:11.234203    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:01:11.234486    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:01:11.254231    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:01:11.288121    4422 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:01:11.288173    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:01:11.288193    4422 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:01:11.288208    4422 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:01:11.288223    4422 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:01:11.288210    4422 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:01:11.288269    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:01:11.288280    4422 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:01:11.288270    4422 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:01:11.323754    4422 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:01:11.323768    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:01:11.324955    4422 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:01:11.324982    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:01:11.324982    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:01:11.324997    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0731 12:01:11.376513    4422 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:01:11.376626    4422 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:01:11.394434    4422 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:01:11.435645    4422 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:01:11.435675    4422 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:01:11.435732    4422 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:01:11.604808    4422 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:01:11.604822    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:01:12.212729    4422 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:01:12.212840    4422 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:01:12.213173    4422 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:01:12.218056    4422 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:01:12.218117    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:01:12.269758    4422 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:01:12.269777    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:01:12.506778    4422 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:01:12.506816    4422 cache_images.go:92] duration metric: took 1.74811825s to LoadCachedImages
	W0731 12:01:12.506852    4422 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
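
LoadCachedImages (above) reconciles each required image: docker image inspect --format {{.Id}} is compared with the expected digest, and on a mismatch the image is removed and re-streamed from the on-host cache via docker load. A sketch of one iteration of that loop, with one image and its tarball path hard-coded from the log; the real code maps each image to its own cache file:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Expected image ID, as reported by the "needs transfer" line above.
    	want := map[string]string{
    		"registry.k8s.io/pause:3.7": "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
    	}
    	for image, id := range want {
    		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    		got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
    		if err == nil && got == id {
    			continue // already the right image, nothing to transfer
    		}
    		// Mismatch (or missing): drop it and stream the cached tarball in.
    		exec.Command("docker", "rmi", image).Run()
    		if err := exec.Command("sh", "-c",
    			"cat /var/lib/minikube/images/pause_3.7 | docker load").Run(); err != nil {
    			panic(err)
    		}
    		fmt.Println("reloaded", image)
    	}
    }
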
	I0731 12:01:12.506858    4422 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:01:12.506907    4422 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-334000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:01:12.506988    4422 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:01:12.520372    4422 cni.go:84] Creating CNI manager for ""
	I0731 12:01:12.520383    4422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:01:12.520390    4422 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:01:12.520398    4422 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-334000 NodeName:running-upgrade-334000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:01:12.520466    4422 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-334000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:01:12.520529    4422 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:01:12.523394    4422 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:01:12.523423    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:01:12.526335    4422 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:01:12.531518    4422 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:01:12.536162    4422 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:01:12.541315    4422 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:01:12.542652    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:01:12.628319    4422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:01:12.633511    4422 certs.go:68] Setting up /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000 for IP: 10.0.2.15
	I0731 12:01:12.633517    4422 certs.go:194] generating shared ca certs ...
	I0731 12:01:12.633525    4422 certs.go:226] acquiring lock for ca certs: {Name:mkf42ffcc2bf4238c4563b7710ee6f745a9fc0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:01:12.633696    4422 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key
	I0731 12:01:12.633745    4422 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key
	I0731 12:01:12.633750    4422 certs.go:256] generating profile certs ...
	I0731 12:01:12.633821    4422 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.key
	I0731 12:01:12.633839    4422 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key.e69ead2a
	I0731 12:01:12.633848    4422 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt.e69ead2a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:01:12.670627    4422 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt.e69ead2a ...
	I0731 12:01:12.670631    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt.e69ead2a: {Name:mk32286d22089f679ca3a16c8e6cd292c6dfeec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:01:12.670870    4422 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key.e69ead2a ...
	I0731 12:01:12.670877    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key.e69ead2a: {Name:mk333bb109f98f12637c417472ff2991de2c6a98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:01:12.671011    4422 certs.go:381] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt.e69ead2a -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt
	I0731 12:01:12.671147    4422 certs.go:385] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key.e69ead2a -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key
	I0731 12:01:12.671294    4422 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/proxy-client.key
	I0731 12:01:12.671431    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem (1338 bytes)
	W0731 12:01:12.671458    4422 certs.go:480] ignoring /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701_empty.pem, impossibly tiny 0 bytes
	I0731 12:01:12.671462    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:01:12.671481    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:01:12.671498    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:01:12.671514    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem (1679 bytes)
	I0731 12:01:12.671557    4422 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:01:12.671847    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:01:12.678780    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:01:12.686880    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:01:12.694101    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:01:12.701186    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:01:12.708167    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 12:01:12.714909    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:01:12.721862    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 12:01:12.730086    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /usr/share/ca-certificates/17012.pem (1708 bytes)
	I0731 12:01:12.737571    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:01:12.744818    4422 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem --> /usr/share/ca-certificates/1701.pem (1338 bytes)
	I0731 12:01:12.751380    4422 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:01:12.756613    4422 ssh_runner.go:195] Run: openssl version
	I0731 12:01:12.758406    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17012.pem && ln -fs /usr/share/ca-certificates/17012.pem /etc/ssl/certs/17012.pem"
	I0731 12:01:12.762131    4422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17012.pem
	I0731 12:01:12.763819    4422 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:21 /usr/share/ca-certificates/17012.pem
	I0731 12:01:12.763841    4422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17012.pem
	I0731 12:01:12.765644    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17012.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:01:12.768712    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:01:12.771679    4422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:01:12.773190    4422 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:01:12.773209    4422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:01:12.775061    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:01:12.778345    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1701.pem && ln -fs /usr/share/ca-certificates/1701.pem /etc/ssl/certs/1701.pem"
	I0731 12:01:12.781918    4422 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1701.pem
	I0731 12:01:12.783543    4422 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:21 /usr/share/ca-certificates/1701.pem
	I0731 12:01:12.783566    4422 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1701.pem
	I0731 12:01:12.785398    4422 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1701.pem /etc/ssl/certs/51391683.0"
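The ln -fs commands in the lines above follow OpenSSL's subject-hash lookup convention: each CA in /etc/ssl/certs must also be reachable under the name <subject-hash>.0 (here b5213941.0, 3ec20f2e.0, 51391683.0) so that openssl and TLS clients can find it at verification time. A minimal sketch of the same step, with the minikubeCA.pem path taken from the log, assuming root privileges:

    # Sketch of the hash-symlink step performed above (not part of the test run).
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints the subject hash, e.g. b5213941
    # OpenSSL resolves CAs in its cert dir by <subject-hash>.<n>; .0 is the first slot.
    ln -fs "$pem" "/etc/ssl/certs/${hash}.0"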
	I0731 12:01:12.788642    4422 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:01:12.790247    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:01:12.792427    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:01:12.794416    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:01:12.796293    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:01:12.798483    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:01:12.800358    4422 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
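The six openssl invocations above are expiry probes: -checkend 86400 makes openssl x509 exit non-zero if the certificate expires within the next 86400 seconds (24 hours), and the exit status tells the caller whether regeneration is needed. A minimal sketch of one such probe, using a cert path from the log:

    # Exit status 0: still valid for at least another 24h; 1: expiring or expired.
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "cert expires within 24h - regenerate" >&2
    fi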
	I0731 12:01:12.802111    4422 kubeadm.go:392] StartCluster: {Name:running-upgrade-334000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50281 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-334000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:01:12.802183    4422 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:01:12.812622    4422 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:01:12.816140    4422 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:01:12.816147    4422 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:01:12.816175    4422 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:01:12.819043    4422 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:01:12.819303    4422 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-334000" does not appear in /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:01:12.819352    4422 kubeconfig.go:62] /Users/jenkins/minikube-integration/19356-1202/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-334000" cluster setting kubeconfig missing "running-upgrade-334000" context setting]
	I0731 12:01:12.819498    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:01:12.820976    4422 kapi.go:59] client config for running-upgrade-334000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046b01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:01:12.821315    4422 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:01:12.824270    4422 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-334000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
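Drift detection here is just a unified diff: kubeadm.go compares the kubeadm.yaml currently on disk with the freshly rendered kubeadm.yaml.new, and a non-zero diff exit status marks the config as drifted. In this run the drift is the CRI socket gaining the unix:// URI scheme and the cgroup driver moving from systemd to cgroupfs. A sketch of the check-and-replace sequence, condensed from the diff at 12:01:12.821 and the cp at 12:01:12.989 (the intervening container/kubelet teardown is omitted):

    # diff exits 1 when the files differ; treat that as "reconfigure from new".
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi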
	I0731 12:01:12.824276    4422 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:01:12.824315    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:01:12.836297    4422 docker.go:483] Stopping containers: [48376bbb6d8a d2c3867e75f2 5e37db09059b da611b7714e6 ffb0becc47c2 e06e53cf80e7 5e670ba0c351 ac3b8b15c1ab 9a56e259f1dd 213d1185b24d dd23bbfc4157 51b64b7e1919 b5b2cdafde0e]
	I0731 12:01:12.836368    4422 ssh_runner.go:195] Run: docker stop 48376bbb6d8a d2c3867e75f2 5e37db09059b da611b7714e6 ffb0becc47c2 e06e53cf80e7 5e670ba0c351 ac3b8b15c1ab 9a56e259f1dd 213d1185b24d dd23bbfc4157 51b64b7e1919 b5b2cdafde0e
	I0731 12:01:12.847546    4422 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:01:12.958123    4422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:01:12.962901    4422 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 31 19:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 31 19:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 31 19:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jul 31 19:00 /etc/kubernetes/scheduler.conf
	
	I0731 12:01:12.962938    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0731 12:01:12.966653    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:01:12.966687    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:01:12.970050    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0731 12:01:12.973683    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:01:12.973701    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:01:12.977427    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0731 12:01:12.980764    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:01:12.980788    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:01:12.983729    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0731 12:01:12.986396    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:01:12.986417    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:01:12.989192    4422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:01:12.991966    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:01:13.012390    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:01:13.742622    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:01:13.943385    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:01:13.969764    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
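Rather than a full kubeadm init, the restart path replays individual init phases against the preserved cluster state, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, etcd. Condensed into a loop, with the binary and config paths taken verbatim from the log:

    # Sketch of the phased restart above; $phase is deliberately unquoted so
    # "certs all" splits into the two arguments kubeadm expects.
    B=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
        sudo env PATH="$B:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done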
	I0731 12:01:13.989576    4422 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:01:13.989654    4422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:01:14.491860    4422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:01:14.991706    4422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:01:14.996230    4422 api_server.go:72] duration metric: took 1.006677709s to wait for apiserver process to appear ...
	I0731 12:01:14.996240    4422 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:01:14.996253    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:19.998381    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:19.998469    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:24.999374    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:24.999445    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:30.000308    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:30.000387    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:35.001287    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:35.001401    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:40.002934    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:40.003055    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:45.005064    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:45.005149    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:50.007776    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:50.007865    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:01:55.009234    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:01:55.009317    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:00.011873    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:00.011963    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:05.014560    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:05.014643    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:10.017262    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:10.017356    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:15.019938    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
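From 12:01:14 to 12:02:15 every probe of https://10.0.2.15:8443/healthz dies with a 5-second client timeout; after roughly a minute of failures minikube switches to gathering component logs (below) before retrying. A rough stand-alone equivalent of one probe, assuming curl is available in the guest (-k because the apiserver presents minikube's own self-signed CA):

    # One healthz probe; --max-time 5 mirrors the 5s client timeout in the log.
    if curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; then
        echo "apiserver healthy"
    else
        echo "healthz probe failed or timed out" >&2
    fi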
	I0731 12:02:15.020269    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:15.061901    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:15.062045    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:15.083551    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:15.083671    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:15.103767    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:15.103847    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:15.116371    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:15.116446    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:15.127951    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:15.128032    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:15.138712    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:15.138785    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:15.149702    4422 logs.go:276] 0 containers: []
	W0731 12:02:15.149713    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:15.149787    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:15.160800    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:15.160824    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:15.160829    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:15.172987    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:15.172999    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:15.184709    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:15.184720    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:15.196632    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:15.196646    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:15.201743    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:15.201753    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:15.242703    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:15.242715    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:15.258106    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:15.258116    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:15.332209    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:15.332222    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:15.350435    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:15.350445    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:15.362437    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:15.362449    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:15.402310    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:15.402320    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:15.413081    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:15.413092    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:15.428646    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:15.428656    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:15.454049    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:15.454056    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:15.466317    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:15.466327    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:15.480282    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:15.480293    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:15.497789    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:15.497800    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:18.011792    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:23.014532    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:23.014939    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:23.056095    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:23.056233    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:23.078712    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:23.078828    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:23.094825    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:23.094894    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:23.108858    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:23.108937    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:23.124224    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:23.124301    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:23.135440    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:23.135511    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:23.145767    4422 logs.go:276] 0 containers: []
	W0731 12:02:23.145779    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:23.145837    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:23.157033    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:23.157053    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:23.157060    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:23.200069    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:23.200080    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:23.215554    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:23.215567    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:23.227479    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:23.227491    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:23.244362    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:23.244374    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:23.255713    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:23.255727    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:23.267581    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:23.267601    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:23.294113    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:23.294120    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:23.335355    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:23.335368    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:23.349754    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:23.349767    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:23.361579    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:23.361593    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:23.377406    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:23.377418    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:23.389037    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:23.389048    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:23.400735    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:23.400756    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:23.419635    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:23.419644    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:23.457665    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:23.457672    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:23.462058    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:23.462066    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:25.979085    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:30.979469    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:30.979678    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:30.992990    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:30.993072    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:31.003992    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:31.004056    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:31.014539    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:31.014608    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:31.024971    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:31.025045    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:31.035366    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:31.035424    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:31.046042    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:31.046106    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:31.056054    4422 logs.go:276] 0 containers: []
	W0731 12:02:31.056064    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:31.056111    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:31.066573    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:31.066590    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:31.066595    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:31.078280    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:31.078293    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:31.089284    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:31.089296    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:31.105802    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:31.105815    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:31.148893    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:31.148906    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:31.162793    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:31.162805    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:31.176926    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:31.176936    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:31.181824    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:31.181833    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:31.193181    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:31.193193    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:31.219792    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:31.219802    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:31.260390    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:31.260402    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:31.272470    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:31.272482    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:31.283774    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:31.283787    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:31.300020    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:31.300033    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:31.341200    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:31.341210    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:31.356066    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:31.356078    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:31.373200    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:31.373210    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:33.891483    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:38.904729    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:38.905147    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:38.956423    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:38.956581    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:38.988509    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:38.988587    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:39.004631    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:39.004695    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:39.015561    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:39.015673    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:39.030723    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:39.030791    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:39.041381    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:39.041457    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:39.051471    4422 logs.go:276] 0 containers: []
	W0731 12:02:39.051483    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:39.051544    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:39.061839    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:39.061863    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:39.061871    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:39.073534    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:39.073545    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:39.097893    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:39.097901    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:39.109248    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:39.109258    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:39.151185    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:39.151197    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:39.165826    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:39.165837    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:39.177669    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:39.177678    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:39.194723    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:39.194733    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:39.199572    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:39.199579    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:39.217022    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:39.217031    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:39.233942    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:39.233952    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:39.249125    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:39.249144    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:39.260654    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:39.260665    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:39.272781    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:39.272790    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:39.314127    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:39.314132    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:39.349601    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:39.349611    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:39.361124    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:39.361139    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:41.881784    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:46.890696    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:46.891175    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:46.930952    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:46.931092    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:46.953840    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:46.953964    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:46.969600    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:46.969672    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:46.981934    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:46.982008    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:46.992526    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:46.992595    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:47.003493    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:47.003566    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:47.014025    4422 logs.go:276] 0 containers: []
	W0731 12:02:47.014037    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:47.014098    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:47.027820    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:47.027838    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:47.027843    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:47.046103    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:47.046115    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:47.072240    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:47.072250    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:47.083969    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:47.083981    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:47.124950    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:47.124963    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:47.160795    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:47.160809    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:47.174984    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:47.174995    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:47.186085    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:47.186096    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:47.197260    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:47.197270    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:47.201622    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:47.201631    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:47.239487    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:47.239498    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:47.253331    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:47.253345    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:47.264406    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:47.264417    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:47.276653    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:47.276665    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:47.291546    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:47.291558    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:47.303120    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:47.303133    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:47.317754    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:47.317764    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:49.833713    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:02:54.840160    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:02:54.840523    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:02:54.874711    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:02:54.874845    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:02:54.895041    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:02:54.895149    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:02:54.909203    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:02:54.909276    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:02:54.920757    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:02:54.920830    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:02:54.935564    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:02:54.935632    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:02:54.948267    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:02:54.948328    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:02:54.958086    4422 logs.go:276] 0 containers: []
	W0731 12:02:54.958098    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:02:54.958157    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:02:54.968643    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:02:54.968660    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:02:54.968665    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:02:54.982097    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:02:54.982107    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:02:54.993841    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:02:54.993855    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:02:55.005467    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:02:55.005479    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:02:55.021074    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:02:55.021085    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:02:55.032222    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:02:55.032237    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:02:55.044901    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:02:55.044914    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:02:55.084958    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:02:55.084967    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:02:55.120380    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:02:55.120390    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:02:55.160953    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:02:55.160965    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:02:55.174228    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:02:55.174242    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:02:55.198434    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:02:55.198441    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:02:55.209723    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:02:55.209733    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:02:55.226961    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:02:55.226971    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:02:55.238401    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:02:55.238414    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:02:55.242966    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:02:55.242974    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:02:55.256982    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:02:55.256994    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:02:57.775102    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:02.778769    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:02.779201    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:02.817804    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:02.817939    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:02.842635    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:02.842726    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:02.857385    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:02.857456    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:02.870294    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:02.870360    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:02.880766    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:02.880838    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:02.891261    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:02.891331    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:02.902186    4422 logs.go:276] 0 containers: []
	W0731 12:03:02.902198    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:02.902253    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:02.916070    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:02.916089    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:02.916095    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:02.953888    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:02.953898    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:02.969732    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:02.969744    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:02.981826    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:02.981838    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:02.986663    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:02.986669    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:03.000876    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:03.000887    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:03.012519    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:03.012530    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:03.024609    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:03.024623    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:03.036280    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:03.036295    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:03.049371    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:03.049384    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:03.066592    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:03.066605    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:03.106832    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:03.106840    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:03.141654    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:03.141664    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:03.156749    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:03.156760    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:03.171604    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:03.171614    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:03.183268    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:03.183277    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:03.195270    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:03.195283    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:05.723838    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:10.727988    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
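The two lines above are the probe half of a retry loop: api_server.go requests https://10.0.2.15:8443/healthz, the 5-second client timeout expires ("Client.Timeout exceeded while awaiting headers"), and the run falls back to enumerating and tailing the control-plane containers before probing again. Below is a minimal, self-contained Go sketch of that probe, not minikube's actual implementation: the URL, the 5s timeout, and the roughly 2.5s pause between cycles are read off the timestamps in the surrounding lines, while the standalone program shape and the InsecureSkipVerify TLS setting are assumptions made so the example runs on its own.

// healthzprobe.go: illustrative reconstruction of the polling pattern in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver's /healthz endpoint.
func checkHealthz(url string) error {
	client := &http.Client{
		// Matches the ~5s gap between each "Checking apiserver healthz"
		// line and the following "stopped: ... context deadline exceeded".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the apiserver on 10.0.2.15:8443 serves a
			// self-signed certificate, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. the Client.Timeout error seen in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for {
		if err := checkHealthz(url); err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			// In the real run, the log-gathering phase happens here
			// (docker ps / docker logs for each k8s_* component).
			time.Sleep(2500 * time.Millisecond) // approximate gap between cycles in the log
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}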
	I0731 12:03:10.728331    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:10.769136    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:10.769257    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:10.791153    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:10.791261    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:10.806669    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:10.806737    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:10.824684    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:10.824751    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:10.835320    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:10.835387    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:10.846189    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:10.846256    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:10.856424    4422 logs.go:276] 0 containers: []
	W0731 12:03:10.856435    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:10.856492    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:10.867229    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:10.867246    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:10.867252    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:10.872038    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:10.872047    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:10.890403    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:10.890415    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:10.902034    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:10.902046    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:10.941447    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:10.941463    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:10.978752    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:10.978768    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:10.997386    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:10.997397    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:11.010133    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:11.010142    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:11.036130    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:11.036142    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:11.071766    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:11.071777    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:11.083808    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:11.083818    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:11.095739    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:11.095750    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:11.108028    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:11.108038    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:11.121754    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:11.121763    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:11.136548    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:11.136560    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:11.151529    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:11.151542    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:11.162840    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:11.162852    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:13.676522    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:18.679655    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:18.679936    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:18.707142    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:18.707212    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:18.718230    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:18.718294    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:18.729194    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:18.729265    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:18.741126    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:18.741197    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:18.754024    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:18.754088    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:18.765304    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:18.765377    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:18.776444    4422 logs.go:276] 0 containers: []
	W0731 12:03:18.776457    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:18.776516    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:18.787690    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:18.787707    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:18.787713    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:18.802106    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:18.802120    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:18.814537    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:18.814547    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:18.826531    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:18.826542    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:18.831401    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:18.831408    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:18.867017    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:18.867031    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:18.882695    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:18.882704    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:18.894999    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:18.895010    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:18.919327    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:18.919334    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:18.956840    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:18.956850    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:18.971290    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:18.971299    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:19.012340    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:19.012360    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:19.024397    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:19.024414    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:19.040272    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:19.040286    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:19.057802    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:19.057816    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:19.069690    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:19.069700    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:19.081983    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:19.081995    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:21.597656    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:26.600796    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:26.600902    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:26.612386    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:26.612465    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:26.623115    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:26.623195    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:26.636985    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:26.637056    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:26.647547    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:26.647616    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:26.658038    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:26.658107    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:26.668775    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:26.668843    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:26.679191    4422 logs.go:276] 0 containers: []
	W0731 12:03:26.679202    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:26.679259    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:26.689926    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:26.689946    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:26.689952    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:26.703805    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:26.703818    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:26.719077    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:26.719086    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:26.731064    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:26.731075    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:26.757178    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:26.757188    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:26.771947    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:26.771958    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:26.785230    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:26.785241    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:26.796961    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:26.796972    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:26.836732    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:26.836739    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:26.841113    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:26.841121    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:26.854933    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:26.854942    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:26.866589    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:26.866599    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:26.878127    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:26.878138    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:26.890188    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:26.890200    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:26.927717    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:26.927729    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:26.965710    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:26.965721    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:26.982993    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:26.983006    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:29.496825    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:34.499302    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:34.499495    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:34.518883    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:34.518988    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:34.537052    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:34.537127    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:34.548330    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:34.548395    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:34.562836    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:34.562910    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:34.572827    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:34.572894    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:34.586633    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:34.586702    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:34.597568    4422 logs.go:276] 0 containers: []
	W0731 12:03:34.597580    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:34.597643    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:34.608152    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:34.608170    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:34.608175    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:34.646642    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:34.646653    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:34.657961    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:34.657974    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:34.669646    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:34.669659    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:34.695063    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:34.695072    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:34.735395    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:34.735402    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:34.772580    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:34.772592    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:34.786636    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:34.786647    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:34.804557    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:34.804570    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:34.816598    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:34.816609    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:34.827712    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:34.827723    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:34.842112    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:34.842128    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:34.846722    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:34.846727    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:34.861416    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:34.861427    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:34.876070    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:34.876082    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:34.890130    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:34.890142    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:34.901913    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:34.901923    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:37.414170    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:42.416846    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:42.416965    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:42.435689    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:42.435778    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:42.448311    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:42.448384    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:42.460415    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:42.460482    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:42.473910    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:42.473984    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:42.485887    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:42.485957    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:42.497747    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:42.497818    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:42.510364    4422 logs.go:276] 0 containers: []
	W0731 12:03:42.510378    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:42.510441    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:42.521909    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:42.521927    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:42.521933    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:42.567579    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:42.567594    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:42.607947    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:42.607969    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:42.620437    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:42.620450    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:42.624797    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:42.624805    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:42.638913    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:42.638927    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:42.653258    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:42.653268    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:42.678600    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:42.678607    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:42.691571    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:42.691581    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:42.732533    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:42.732541    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:42.744609    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:42.744618    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:42.756033    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:42.756045    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:42.777234    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:42.777246    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:42.788683    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:42.788695    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:42.803693    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:42.803703    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:42.815552    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:42.815565    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:42.830162    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:42.830171    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:45.350588    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:50.352964    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:50.353183    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:50.373965    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:50.374076    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:50.389339    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:50.389420    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:50.401567    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:50.401634    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:50.411938    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:50.412002    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:50.422057    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:50.422117    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:50.432831    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:50.432895    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:50.442744    4422 logs.go:276] 0 containers: []
	W0731 12:03:50.442756    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:50.442811    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:50.453037    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:50.453053    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:50.453058    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:50.467023    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:50.467034    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:50.504797    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:50.504807    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:50.516342    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:50.516353    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:03:50.527463    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:50.527472    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:50.539312    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:50.539325    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:50.577183    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:50.577193    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:50.591301    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:50.591315    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:50.605808    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:50.605821    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:50.622209    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:50.622222    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:50.640900    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:50.640910    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:50.658911    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:50.658923    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:50.670415    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:50.670425    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:50.695516    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:50.695525    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:50.735862    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:50.735871    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:50.740428    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:50.740438    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:50.754709    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:50.754724    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:53.268730    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:03:58.270948    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0731 12:03:58.271055    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:03:58.286944    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:03:58.287021    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:03:58.298251    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:03:58.298339    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:03:58.311155    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:03:58.311232    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:03:58.326680    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:03:58.326761    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:03:58.338401    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:03:58.338466    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:03:58.353804    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:03:58.353880    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:03:58.366332    4422 logs.go:276] 0 containers: []
	W0731 12:03:58.366342    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:03:58.366401    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:03:58.377019    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:03:58.377041    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:03:58.377047    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:03:58.391605    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:03:58.391616    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:03:58.404006    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:03:58.404019    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:03:58.422836    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:03:58.422850    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:03:58.438778    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:03:58.438788    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:03:58.464720    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:03:58.464730    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:03:58.485594    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:03:58.485612    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:03:58.503882    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:03:58.503893    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:03:58.519350    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:03:58.519361    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:03:58.532257    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:03:58.532269    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:03:58.550407    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:03:58.550418    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:03:58.562778    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:03:58.562789    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:03:58.601921    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:03:58.601934    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:03:58.606378    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:03:58.606387    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:03:58.640981    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:03:58.640992    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:03:58.681397    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:03:58.681410    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:03:58.693082    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:03:58.693093    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:01.206350    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:06.209018    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:06.209323    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:06.236397    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:06.236523    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:06.254333    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:06.254415    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:06.268976    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:06.269053    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:06.285246    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:06.285320    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:06.295470    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:06.295549    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:06.306240    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:06.306308    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:06.316039    4422 logs.go:276] 0 containers: []
	W0731 12:04:06.316051    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:06.316106    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:06.326710    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:06.326727    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:06.326735    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:06.364198    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:06.364209    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:06.381702    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:06.381715    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:06.395781    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:06.395792    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:06.435494    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:06.435504    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:06.440224    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:06.440231    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:06.457274    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:06.457285    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:06.468725    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:06.468736    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:06.492240    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:06.492246    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:06.506329    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:06.506339    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:06.518473    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:06.518482    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:06.529822    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:06.529833    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:06.545036    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:06.545050    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:06.557010    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:06.557022    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:06.568322    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:06.568333    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:06.580336    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:06.580347    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:06.616584    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:06.616595    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:09.133192    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:14.135788    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:14.136029    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:14.156116    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:14.156198    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:14.169618    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:14.169682    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:14.181007    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:14.181086    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:14.192771    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:14.192838    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:14.203150    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:14.203216    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:14.213735    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:14.213792    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:14.223883    4422 logs.go:276] 0 containers: []
	W0731 12:04:14.223894    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:14.223946    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:14.234068    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:14.234088    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:14.234093    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:14.245542    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:14.245555    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:14.256869    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:14.256880    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:14.272395    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:14.272408    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:14.287484    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:14.287497    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:14.329333    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:14.329343    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:14.343417    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:14.343431    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:14.357041    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:14.357052    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:14.371954    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:14.371964    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:14.396770    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:14.396783    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:14.410577    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:14.410587    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:14.421499    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:14.421510    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:14.438645    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:14.438656    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:14.457695    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:14.457707    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:14.498056    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:14.498069    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:14.502259    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:14.502267    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:14.539417    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:14.539427    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:17.055614    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:22.057820    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:22.057977    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:22.071309    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:22.071387    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:22.084194    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:22.084278    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:22.096511    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:22.096588    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:22.109000    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:22.109071    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:22.120923    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:22.121002    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:22.133126    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:22.133203    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:22.147157    4422 logs.go:276] 0 containers: []
	W0731 12:04:22.147168    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:22.147227    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:22.159452    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:22.159476    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:22.159483    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:22.185421    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:22.185461    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:22.227305    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:22.227324    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:22.243510    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:22.243524    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:22.256685    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:22.256697    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:22.273786    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:22.273798    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:22.286839    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:22.286850    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:22.335632    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:22.335653    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:22.353525    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:22.353538    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:22.374348    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:22.374364    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:22.380591    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:22.380604    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:22.395859    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:22.395871    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:22.408748    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:22.408758    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:22.420849    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:22.420862    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:22.433053    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:22.433064    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:22.470300    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:22.470315    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:22.483064    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:22.483076    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:24.997504    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:30.000239    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:30.000660    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:30.039607    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:30.039739    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:30.068459    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:30.068557    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:30.085134    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:30.085202    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:30.097663    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:30.097728    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:30.108713    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:30.108780    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:30.119424    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:30.119491    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:30.130012    4422 logs.go:276] 0 containers: []
	W0731 12:04:30.130025    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:30.130090    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:30.146045    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:30.146065    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:30.146070    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:30.184724    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:30.184735    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:30.196369    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:30.196380    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:30.208311    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:30.208323    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:30.242904    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:30.242917    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:30.259906    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:30.259918    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:30.271125    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:30.271137    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:30.288789    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:30.288801    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:30.302940    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:30.302952    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:30.317433    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:30.317446    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:30.334483    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:30.334493    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:30.345798    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:30.345808    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:30.350149    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:30.350159    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:30.388114    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:30.388124    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:30.399929    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:30.399940    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:30.411364    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:30.411376    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:30.422762    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:30.422775    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:32.949219    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:37.951390    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:37.951513    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:37.964046    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:37.964126    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:37.975378    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:37.975450    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:37.986268    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:37.986337    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:37.998363    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:37.998440    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:38.008914    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:38.008988    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:38.021501    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:38.021567    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:38.047335    4422 logs.go:276] 0 containers: []
	W0731 12:04:38.047383    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:38.047452    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:38.068832    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
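Each diagnostic cycle re-discovers the component containers by name prefix before tailing their logs. A sketch of that discovery step using os/exec follows; the docker flags are copied from the Run lines above, while the wrapper function is hypothetical:

```go
// list_containers.go - sketch of the container discovery step shown above:
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` prints one
// container ID per line. The docker flags come from the log; the helper
// itself is illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields drops blank lines and trailing newlines.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as logs.go:276
	}
}
```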
	I0731 12:04:38.068851    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:38.068857    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:38.109076    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:38.109090    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:38.124621    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:38.124630    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:38.163627    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:38.163639    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:38.175068    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:38.175079    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:38.192762    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:38.192773    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:38.216357    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:38.216364    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:38.230590    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:38.230603    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:38.244365    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:38.244374    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:38.258488    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:38.258502    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:38.270668    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:38.270680    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:38.282196    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:38.282212    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:38.317422    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:38.317434    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:38.330526    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:38.330541    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:38.342316    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:38.342331    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:38.353977    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:38.353991    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:38.366405    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:38.366421    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:40.873264    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:45.875385    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:45.875494    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:45.886598    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:45.886682    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:45.898711    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:45.898784    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:45.910149    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:45.910215    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:45.922288    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:45.922367    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:45.932890    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:45.932962    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:45.943715    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:45.943784    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:45.954402    4422 logs.go:276] 0 containers: []
	W0731 12:04:45.954414    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:45.954474    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:45.964522    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:45.964539    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:45.964544    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:45.979264    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:45.979275    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:45.991413    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:45.991424    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:46.014904    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:46.014912    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:46.057923    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:46.057941    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:46.071679    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:46.071689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:46.084639    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:46.084650    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:46.096469    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:46.096480    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:46.100891    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:46.100901    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:46.135985    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:46.135996    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:46.151665    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:46.151678    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:46.192504    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:46.192520    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:46.216781    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:46.216791    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:46.229240    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:46.229255    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:46.246067    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:46.246080    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:46.258857    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:46.258869    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:46.276802    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:46.276812    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:48.790536    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:53.793159    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:53.793406    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:53.821520    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:53.821638    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:53.837366    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:53.837451    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:53.850375    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:53.850438    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:53.861326    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:53.861398    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:53.871899    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:53.871957    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:53.882743    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:53.882818    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:53.893430    4422 logs.go:276] 0 containers: []
	W0731 12:04:53.893441    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:53.893492    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:53.903936    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:53.903957    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:53.903975    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:53.908582    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:53.908591    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:53.945965    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:53.945975    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:53.957505    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:53.957515    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:53.972591    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:53.972601    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:53.990036    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:53.990046    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:54.001175    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:54.001185    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:54.024735    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:54.024743    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:54.059124    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:54.059136    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:54.072844    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:54.072860    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:54.093842    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:54.093854    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:54.105609    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:54.105621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:54.120344    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:54.120354    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:54.131725    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:54.131736    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:54.143928    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:54.143943    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:54.184169    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:54.184187    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:54.199350    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:54.199361    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:56.713876    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:01.716415    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:01.716653    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:01.735586    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:05:01.735677    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:01.747949    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:05:01.748027    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:01.758677    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:05:01.758750    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:01.769629    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:05:01.769701    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:01.780481    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:05:01.780569    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:01.791006    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:05:01.791076    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:01.801717    4422 logs.go:276] 0 containers: []
	W0731 12:05:01.801728    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:01.801786    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:01.817820    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:05:01.817841    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:05:01.817846    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:05:01.833492    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:01.833503    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:01.838371    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:05:01.838385    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:05:01.852909    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:05:01.852920    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:05:01.864383    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:05:01.864396    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:05:01.879347    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:05:01.879358    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:05:01.891361    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:05:01.891374    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:05:01.906655    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:01.906666    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:01.948363    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:01.948371    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:01.982791    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:05:01.982801    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:05:01.994893    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:05:01.994908    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:05:02.009389    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:05:02.009402    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:05:02.027323    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:05:02.027334    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:05:02.040749    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:02.040764    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:02.063596    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:05:02.063607    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:05:02.100730    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:05:02.100744    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:05:02.115464    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:05:02.115475    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:04.629470    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:09.631749    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:09.631965    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:09.643229    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:05:09.643307    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:09.654605    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:05:09.654684    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:09.665898    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:05:09.665976    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:09.680905    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:05:09.680979    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:09.691224    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:05:09.691290    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:09.701604    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:05:09.701673    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:09.711440    4422 logs.go:276] 0 containers: []
	W0731 12:05:09.711453    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:09.711505    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:09.721949    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:05:09.721968    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:05:09.721973    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:05:09.736228    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:09.736240    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:09.760043    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:05:09.760052    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:05:09.774689    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:05:09.774703    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:05:09.786258    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:05:09.786274    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:05:09.797797    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:05:09.797807    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:05:09.812730    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:09.812741    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:09.817214    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:09.817223    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:09.851241    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:05:09.851254    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:05:09.889947    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:05:09.889959    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:05:09.903630    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:05:09.903644    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:05:09.921672    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:05:09.921683    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:05:09.933134    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:09.933143    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:09.973278    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:05:09.973286    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:05:09.984710    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:05:09.984720    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:05:09.995956    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:05:09.995968    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:05:10.007059    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:05:10.007072    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:12.521079    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:17.523365    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:17.523488    4422 kubeadm.go:597] duration metric: took 4m4.669988625s to restartPrimaryControlPlane
	W0731 12:05:17.523621    4422 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:05:17.523677    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:05:18.512727    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:05:18.517720    4422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:05:18.520565    4422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:05:18.523230    4422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:05:18.523236    4422 kubeadm.go:157] found existing configuration files:
	
	I0731 12:05:18.523260    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0731 12:05:18.526360    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:05:18.526382    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:05:18.529399    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0731 12:05:18.531944    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:05:18.531967    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:05:18.534910    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0731 12:05:18.537898    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:05:18.537921    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:05:18.540428    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0731 12:05:18.543060    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:05:18.543082    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
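The block above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here every grep exits with status 2 only because the files are already gone after the reset). A pure-Go sketch of that check-then-remove logic, assuming the paths and endpoint shown in the log; the real code shells these commands out over SSH instead:

```go
// stale_config_cleanup.go - sketch of the cleanup pass above: any kubeconfig
// that does not mention the expected control-plane endpoint is removed so
// `kubeadm init` can regenerate it. Paths and endpoint come from the log;
// the in-process implementation is illustrative.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50281"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat the config as stale.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			os.Remove(conf) // ignore the error; the file may already be gone
		}
	}
}
```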
	I0731 12:05:18.546182    4422 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:05:18.565946    4422 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:05:18.565974    4422 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:05:18.611886    4422 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:05:18.611975    4422 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:05:18.612091    4422 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:05:18.664013    4422 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:05:18.667685    4422 out.go:204]   - Generating certificates and keys ...
	I0731 12:05:18.667718    4422 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:05:18.667764    4422 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:05:18.667810    4422 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:05:18.667842    4422 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:05:18.667885    4422 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:05:18.667923    4422 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:05:18.667963    4422 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:05:18.667996    4422 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:05:18.668071    4422 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:05:18.668112    4422 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:05:18.668132    4422 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:05:18.668161    4422 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:05:18.691319    4422 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:05:18.756627    4422 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:05:18.793635    4422 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:05:18.998716    4422 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:05:19.034447    4422 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:05:19.034815    4422 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:05:19.034843    4422 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:05:19.128211    4422 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:05:19.131413    4422 out.go:204]   - Booting up control plane ...
	I0731 12:05:19.131463    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:05:19.131503    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:05:19.131537    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:05:19.131587    4422 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:05:19.131661    4422 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:05:23.132985    4422 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002179 seconds
	I0731 12:05:23.133049    4422 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:05:23.136226    4422 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:05:23.652700    4422 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:05:23.652993    4422 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-334000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:05:24.156407    4422 kubeadm.go:310] [bootstrap-token] Using token: jndj12.muje0ideebh5ulzd
	I0731 12:05:24.162145    4422 out.go:204]   - Configuring RBAC rules ...
	I0731 12:05:24.162211    4422 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:05:24.162255    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:05:24.168877    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:05:24.169705    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:05:24.170574    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:05:24.171577    4422 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:05:24.174548    4422 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:05:24.363498    4422 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:05:24.560761    4422 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:05:24.561242    4422 kubeadm.go:310] 
	I0731 12:05:24.561279    4422 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:05:24.561284    4422 kubeadm.go:310] 
	I0731 12:05:24.561322    4422 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:05:24.561325    4422 kubeadm.go:310] 
	I0731 12:05:24.561337    4422 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:05:24.561364    4422 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:05:24.561394    4422 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:05:24.561399    4422 kubeadm.go:310] 
	I0731 12:05:24.561433    4422 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:05:24.561440    4422 kubeadm.go:310] 
	I0731 12:05:24.561465    4422 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:05:24.561468    4422 kubeadm.go:310] 
	I0731 12:05:24.561497    4422 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:05:24.561543    4422 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:05:24.561590    4422 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:05:24.561593    4422 kubeadm.go:310] 
	I0731 12:05:24.561641    4422 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:05:24.561683    4422 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:05:24.561687    4422 kubeadm.go:310] 
	I0731 12:05:24.561726    4422 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jndj12.muje0ideebh5ulzd \
	I0731 12:05:24.561780    4422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d \
	I0731 12:05:24.561795    4422 kubeadm.go:310] 	--control-plane 
	I0731 12:05:24.561800    4422 kubeadm.go:310] 
	I0731 12:05:24.561842    4422 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:05:24.561845    4422 kubeadm.go:310] 
	I0731 12:05:24.561883    4422 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jndj12.muje0ideebh5ulzd \
	I0731 12:05:24.561940    4422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d 
	I0731 12:05:24.562001    4422 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
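The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA before trusting the bootstrap token. A sketch that recomputes it is below; /etc/kubernetes/pki/ca.crt is the conventional location and an assumption here:

```go
// ca_cert_hash.go - recomputes the discovery-token-ca-cert-hash shown in the
// join commands above: sha256 over the DER-encoded SubjectPublicKeyInfo of
// the cluster CA certificate, per kubeadm's documented scheme.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare against the hash in `kubeadm join`
}
```

Run against the same CA, this should reproduce the sha256 hash embedded in both join commands above.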
	I0731 12:05:24.562010    4422 cni.go:84] Creating CNI manager for ""
	I0731 12:05:24.562019    4422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:05:24.569390    4422 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:05:24.573515    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:05:24.576540    4422 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
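The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config referred to in the surrounding lines. The exact bytes are not shown in the log; the sketch below writes a generic bridge-plugin conflist of the same shape, with the subnet, names, and version being illustrative rather than minikube's actual file:

```go
// write_cni_conflist.go - sketch of the mkdir + scp step above: install a
// bridge CNI config under /etc/cni/net.d. The conflist body is a generic
// example of the bridge plugin's schema, NOT the exact 496-byte file
// minikube copies.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil { // mirrors `mkdir -p`
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```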
	I0731 12:05:24.581597    4422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:05:24.581636    4422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:05:24.581659    4422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-334000 minikube.k8s.io/updated_at=2024_07_31T12_05_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=running-upgrade-334000 minikube.k8s.io/primary=true
	I0731 12:05:24.626821    4422 kubeadm.go:1113] duration metric: took 45.218583ms to wait for elevateKubeSystemPrivileges
	I0731 12:05:24.626839    4422 ops.go:34] apiserver oom_adj: -16
	I0731 12:05:24.626843    4422 kubeadm.go:394] duration metric: took 4m11.787507792s to StartCluster
	I0731 12:05:24.626853    4422 settings.go:142] acquiring lock: {Name:mk8345ab3fe8ab5ac7063435ec374691aa431221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:05:24.626944    4422 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:05:24.627321    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:05:24.627521    4422 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:05:24.627585    4422 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:05:24.627616    4422 config.go:182] Loaded profile config "running-upgrade-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:05:24.627622    4422 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-334000"
	I0731 12:05:24.627679    4422 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-334000"
	I0731 12:05:24.627647    4422 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-334000"
	W0731 12:05:24.627684    4422 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:05:24.627697    4422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-334000"
	I0731 12:05:24.627711    4422 host.go:66] Checking if "running-upgrade-334000" exists ...
	I0731 12:05:24.627968    4422 retry.go:31] will retry after 504.651896ms: connect: dial unix /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/monitor: connect: connection refused
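The retry.go line above shows a generic retry helper backing off for a randomized delay after the QEMU monitor socket refuses the connection. A sketch of that retry-with-jitter pattern follows; only the delay-then-retry behavior comes from the log, and the policy details (attempt count, base delay, jitter) are made up:

```go
// retry_backoff.go - sketch of the behavior behind "will retry after
// 504.651896ms": retry a flaky operation with a jittered delay so
// concurrent retriers don't synchronize. Policy values are illustrative.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```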
	I0731 12:05:24.628595    4422 kapi.go:59] client config for running-upgrade-334000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046b01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:05:24.628722    4422 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-334000"
	W0731 12:05:24.628726    4422 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:05:24.628732    4422 host.go:66] Checking if "running-upgrade-334000" exists ...
	I0731 12:05:24.629251    4422 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:05:24.629255    4422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:05:24.629260    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:05:24.631257    4422 out.go:177] * Verifying Kubernetes components...
	I0731 12:05:24.639354    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:05:24.723945    4422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:05:24.729487    4422 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:05:24.729526    4422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:05:24.731160    4422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:05:24.735827    4422 api_server.go:72] duration metric: took 108.297375ms to wait for apiserver process to appear ...
	I0731 12:05:24.735835    4422 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:05:24.735841    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:25.139303    4422 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:05:25.143302    4422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:05:25.143309    4422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:05:25.143316    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:05:25.177262    4422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:05:29.737945    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:29.737992    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:34.738306    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:34.738345    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:39.738682    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:39.738702    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:44.739201    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:44.739228    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:49.739823    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:49.739883    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:54.740754    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:54.740800    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:05:55.047805    4422 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:05:55.051119    4422 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:05:55.063004    4422 addons.go:510] duration metric: took 30.435949708s for enable addons: enabled=[storage-provisioner]
	I0731 12:05:59.741832    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:59.741863    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:04.743194    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:04.743219    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:09.744955    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:09.745021    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:14.747370    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:14.747414    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:19.749630    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:19.749687    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:24.751966    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
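
The run above shows the shape of the failure: every healthz GET against https://10.0.2.15:8443 dies after roughly five seconds with "Client.Timeout exceeded while awaiting headers", and the next attempt starts immediately, which is why the probe lines land on a steady 5-second cadence. A minimal shell sketch of that loop, as an assumption for illustration only (the real probe lives in api_server.go and also inspects the HTTP status code):

	# Probe /healthz with a 5-second budget per attempt, retrying on timeout.
	# -k is needed because the guest apiserver presents a self-signed cert.
	while ! curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	    echo "stopped: healthz probe timed out after 5s; retrying"
	done
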
	I0731 12:06:24.752174    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:24.768429    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:24.768517    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:24.780932    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:24.781013    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:24.791899    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:24.791972    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:24.802424    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:24.802494    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:24.833170    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:24.833247    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:24.844683    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:24.844753    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:24.854949    4422 logs.go:276] 0 containers: []
	W0731 12:06:24.854965    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:24.855025    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:24.865370    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:24.865387    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:24.865393    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:24.877388    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:24.877399    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:24.888909    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:24.888919    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:24.902273    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:24.902284    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:24.938470    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:24.938481    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:24.976139    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:24.976153    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:24.993833    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:24.993843    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:25.005408    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:25.005418    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:25.020129    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:25.020142    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:25.024615    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:25.024624    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:25.041743    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:25.041755    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:25.053524    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:25.053535    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:25.070813    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:25.070823    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
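
Each failed probe triggers the diagnostics pass recorded above: one docker ps -a lookup per k8s_* component name, a docker logs --tail 400 for every container found, then the kubelet and docker journals, dmesg, container status, and kubectl describe nodes. The individual commands below are the ones the log itself runs; only the loop that wraps them is our reconstruction:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	    ids=$(docker ps -a --filter=name=k8s_$c --format={{.ID}})
	    if [ -z "$ids" ]; then
	        echo "No container was found matching \"$c\""
	        continue
	    fi
	    for id in $ids; do
	        docker logs --tail 400 "$id"    # last 400 lines per container
	    done
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Prefer crictl when installed, else fall back to plain docker ps:
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
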
	I0731 12:06:27.597531    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:32.599944    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:32.600070    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:32.610965    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:32.611044    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:32.622060    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:32.622137    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:32.633203    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:32.633274    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:32.643847    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:32.643912    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:32.654516    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:32.654584    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:32.666607    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:32.666679    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:32.680630    4422 logs.go:276] 0 containers: []
	W0731 12:06:32.680642    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:32.680704    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:32.691138    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:32.691156    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:32.691161    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:32.705386    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:32.705397    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:32.719409    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:32.719420    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:32.730986    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:32.730996    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:32.748047    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:32.748058    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:32.772847    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:32.772855    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:32.785514    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:32.785525    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:32.790615    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:32.790624    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:32.825241    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:32.825251    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:32.840533    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:32.840546    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:32.852210    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:32.852221    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:32.863821    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:32.863830    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:32.875464    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:32.875474    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:35.410253    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:40.412982    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:40.413199    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:40.432008    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:40.432103    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:40.446805    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:40.446886    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:40.459438    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:40.459503    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:40.469948    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:40.470021    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:40.480765    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:40.480833    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:40.490918    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:40.490980    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:40.500627    4422 logs.go:276] 0 containers: []
	W0731 12:06:40.500638    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:40.500688    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:40.510842    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:40.510856    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:40.510862    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:40.546536    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:40.546546    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:40.551072    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:40.551078    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:40.585216    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:40.585232    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:40.599628    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:40.599639    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:40.614141    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:40.614152    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:40.625937    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:40.625947    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:40.637714    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:40.637724    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:40.653458    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:40.653468    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:40.665064    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:40.665075    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:40.676977    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:40.676987    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:40.693730    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:40.693739    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:40.705064    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:40.705074    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:43.230273    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:48.232741    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:48.233221    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:48.274772    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:48.274904    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:48.295691    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:48.295786    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:48.310940    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:48.311014    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:48.323684    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:48.323760    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:48.334677    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:48.334747    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:48.346112    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:48.346188    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:48.357266    4422 logs.go:276] 0 containers: []
	W0731 12:06:48.357277    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:48.357336    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:48.367494    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:48.367511    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:48.367516    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:48.402570    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:48.402582    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:48.416561    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:48.416572    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:48.428298    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:48.428309    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:48.445313    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:48.445328    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:48.471277    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:48.471290    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:48.482532    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:48.482546    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:48.516116    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:48.516127    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:48.530208    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:48.530220    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:48.542078    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:48.542092    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:48.556736    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:48.556747    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:48.573841    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:48.573851    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:48.585142    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:48.585155    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:51.091637    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:56.093889    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:56.094242    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:56.135680    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:56.135828    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:56.156830    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:56.156929    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:56.172036    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:56.172119    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:56.184585    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:56.184655    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:56.195422    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:56.195487    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:56.205802    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:56.205876    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:56.224877    4422 logs.go:276] 0 containers: []
	W0731 12:06:56.224889    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:56.224952    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:56.235392    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:56.235411    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:56.235416    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:56.270733    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:56.270745    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:56.275120    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:56.275128    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:56.289368    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:56.289379    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:56.301357    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:56.301371    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:56.313276    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:56.313289    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:56.338207    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:56.338217    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:56.349487    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:56.349501    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:56.384613    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:56.384626    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:56.398678    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:56.398689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:56.410380    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:56.410391    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:56.422170    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:56.422183    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:56.440005    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:56.440016    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:58.962691    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:03.965522    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:03.965828    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:04.003994    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:04.004119    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:04.024403    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:04.024490    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:04.039586    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:04.039665    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:04.051475    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:04.051540    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:04.065335    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:04.065407    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:04.076529    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:04.076603    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:04.091516    4422 logs.go:276] 0 containers: []
	W0731 12:07:04.091530    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:04.091591    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:04.102240    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:04.102255    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:04.102261    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:04.115072    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:04.115082    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:04.149151    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:04.149162    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:04.164344    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:04.164357    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:04.180178    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:04.180190    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:04.191969    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:04.191980    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:04.206641    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:04.206653    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:04.224724    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:04.224735    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:04.229326    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:04.229334    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:04.266717    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:04.266729    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:04.281149    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:04.281162    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:04.293774    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:04.293791    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:04.305169    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:04.305179    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:06.832728    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:11.835351    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:11.835707    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:11.873402    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:11.873542    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:11.895287    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:11.895375    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:11.910229    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:11.910292    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:11.922732    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:11.922792    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:11.934349    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:11.934421    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:11.945363    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:11.945423    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:11.956270    4422 logs.go:276] 0 containers: []
	W0731 12:07:11.956282    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:11.956346    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:11.968612    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:11.968626    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:11.968633    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:12.004406    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:12.004414    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:12.017023    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:12.017034    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:12.032960    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:12.032971    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:12.045233    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:12.045245    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:12.057595    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:12.057607    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:12.082491    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:12.082501    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:12.086734    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:12.086742    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:12.126235    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:12.126246    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:12.141035    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:12.141046    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:12.155058    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:12.155070    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:12.167407    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:12.167418    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:12.185722    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:12.185733    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:14.699944    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:19.702166    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:19.702433    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:19.725749    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:19.725859    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:19.741296    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:19.741370    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:19.753979    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:19.754054    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:19.766653    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:19.766720    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:19.777458    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:19.777525    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:19.788037    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:19.788113    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:19.798215    4422 logs.go:276] 0 containers: []
	W0731 12:07:19.798228    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:19.798286    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:19.809015    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:19.809032    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:19.809039    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:19.823417    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:19.823428    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:19.839561    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:19.839572    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:19.864373    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:19.864382    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:19.878749    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:19.878762    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:19.890444    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:19.890455    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:19.924844    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:19.924855    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:19.939426    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:19.939440    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:19.954562    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:19.954573    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:19.972062    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:19.972075    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:19.985234    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:19.985245    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:19.997286    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:19.997298    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:20.033258    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:20.033267    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:22.539817    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:27.542377    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:27.542538    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:27.559196    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:27.559286    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:27.572934    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:27.573007    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:27.583930    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:27.583998    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:27.595042    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:27.595115    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:27.606115    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:27.606186    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:27.616800    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:27.616865    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:27.627184    4422 logs.go:276] 0 containers: []
	W0731 12:07:27.627196    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:27.627257    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:27.637621    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:27.637635    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:27.637641    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:27.673945    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:27.673961    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:27.688544    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:27.688556    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:27.699741    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:27.699755    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:27.711218    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:27.711233    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:27.722454    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:27.722468    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:27.726952    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:27.726958    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:27.762037    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:27.762050    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:27.776028    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:27.776039    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:27.790882    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:27.790893    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:27.802884    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:27.802895    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:27.819871    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:27.819881    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:27.831505    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:27.831516    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:30.356466    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:35.357915    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:35.358277    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:35.394673    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:35.394801    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:35.413197    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:35.413295    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:35.427462    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:35.427540    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:35.439638    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:35.439711    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:35.452395    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:35.452467    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:35.467570    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:35.467632    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:35.478085    4422 logs.go:276] 0 containers: []
	W0731 12:07:35.478097    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:35.478155    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:35.489335    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:35.489351    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:35.489358    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:35.525012    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:35.525026    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:35.543054    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:35.543066    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:35.554640    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:35.554649    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:35.577747    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:35.577757    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:35.592762    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:35.592775    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:35.628188    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:35.628198    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:35.632704    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:35.632712    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:35.648181    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:35.648191    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:35.662549    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:35.662562    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:35.674918    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:35.674932    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:35.686435    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:35.686447    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:35.701328    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:35.701341    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:38.215047    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:43.216762    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:43.216978    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:43.241617    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:43.241709    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:43.255277    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:43.255345    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:43.270322    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:43.270384    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:43.280533    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:43.280593    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:43.291011    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:43.291069    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:43.301238    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:43.301298    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:43.311603    4422 logs.go:276] 0 containers: []
	W0731 12:07:43.311612    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:43.311664    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:43.321887    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:43.321907    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:43.321912    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:43.333227    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:43.333241    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:43.345187    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:43.345198    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:43.357696    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:43.357707    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:43.362083    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:43.362090    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:43.397507    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:43.397518    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:43.419286    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:43.419298    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:43.443461    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:43.443475    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:43.454731    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:43.454743    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:43.466245    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:43.466255    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:43.483667    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:43.483677    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:43.495238    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:43.495249    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:43.513551    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:43.513561    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:43.528195    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:43.528205    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:43.562316    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:43.562326    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
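
One change worth noting in this cycle: from 12:07:43 onward the coredns lookup returns four containers ([73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]) where every earlier cycle saw two. Since the listing uses -a, it includes exited containers, so the two new IDs are consistent with the coredns pods having been restarted while the apiserver stayed unreachable. A hypothetical follow-up (not run in this log) that would confirm this is to add the Status column to the same query:

	docker ps -a --filter=name=k8s_coredns --format "{{.ID}} {{.Status}}"
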
	I0731 12:07:46.082155    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:51.084487    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:51.084688    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:51.103990    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:51.104074    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:51.117913    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:51.117979    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:51.129050    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:51.129116    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:51.140699    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:51.140768    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:51.151233    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:51.151296    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:51.162247    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:51.162316    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:51.172927    4422 logs.go:276] 0 containers: []
	W0731 12:07:51.172939    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:51.172998    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:51.183949    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:51.183967    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:51.183973    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:51.198956    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:51.198970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:51.211208    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:51.211218    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:51.215732    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:51.215741    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:51.230581    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:51.230592    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:51.242238    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:51.242249    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:51.254681    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:51.254692    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:51.290577    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:51.290587    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:51.304336    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:51.304347    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:51.316020    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:51.316031    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:51.327471    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:51.327482    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:51.339061    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:51.339071    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:51.372885    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:51.372894    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:51.384930    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:51.384942    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:51.410822    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:51.410834    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:53.931472    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:58.933217    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:58.933511    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:58.962369    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:58.962506    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:58.980872    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:58.980961    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:58.994648    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:58.994730    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:59.006102    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:59.006176    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:59.016592    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:59.016660    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:59.027228    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:59.027306    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:59.037658    4422 logs.go:276] 0 containers: []
	W0731 12:07:59.037672    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:59.037731    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:59.050225    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:59.050242    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:59.050247    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:59.085721    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:59.085732    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:59.099989    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:59.100001    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:59.117569    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:59.117579    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:59.132707    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:59.132718    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:59.144925    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:59.144935    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:59.156611    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:59.156621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:59.170986    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:59.171001    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:59.182670    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:59.182681    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:59.200385    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:59.200398    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:59.234053    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:59.234060    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:59.238324    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:59.238333    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:59.250042    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:59.250054    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:59.261459    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:59.261471    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:59.285611    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:59.285621    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:01.799274    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:06.801573    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:06.801919    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:06.841148    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:06.841282    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:06.860823    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:06.860926    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:06.875626    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:06.875709    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:06.887510    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:06.887582    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:06.898436    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:06.898512    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:06.909117    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:06.909184    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:06.924361    4422 logs.go:276] 0 containers: []
	W0731 12:08:06.924374    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:06.924431    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:06.934751    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:06.934769    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:06.934774    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:06.975448    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:06.975462    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:06.990995    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:06.991007    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:07.004953    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:07.004964    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:07.017591    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:07.017602    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:07.029578    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:07.029589    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:07.064663    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:07.064669    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:07.079852    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:07.079864    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:07.097168    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:07.097179    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:07.109262    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:07.109271    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:07.113868    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:07.113877    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:07.125789    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:07.125799    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:07.138151    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:07.138165    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:07.153079    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:07.153089    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:07.171841    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:07.171854    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:09.701309    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:14.703654    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:14.703898    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:14.722859    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:14.722931    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:14.739410    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:14.739486    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:14.750008    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:14.750074    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:14.760813    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:14.760891    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:14.772375    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:14.772449    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:14.783274    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:14.783339    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:14.793793    4422 logs.go:276] 0 containers: []
	W0731 12:08:14.793805    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:14.793865    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:14.804295    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:14.804314    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:14.804320    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:14.840001    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:14.840011    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:14.844904    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:14.844914    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:14.883120    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:14.883131    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:14.895077    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:14.895087    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:14.910715    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:14.910727    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:14.928247    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:14.928258    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:14.940426    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:14.940438    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:14.959051    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:14.959061    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:14.970862    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:14.970874    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:14.986430    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:14.986440    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:14.997755    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:14.997764    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:15.023112    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:15.023119    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:15.037687    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:15.037702    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:15.049179    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:15.049191    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:17.562705    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:22.563262    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:22.563404    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:22.574987    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:22.575064    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:22.586713    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:22.586793    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:22.597265    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:22.597341    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:22.608344    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:22.608409    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:22.622607    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:22.622669    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:22.633395    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:22.633468    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:22.643784    4422 logs.go:276] 0 containers: []
	W0731 12:08:22.643797    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:22.643855    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:22.653850    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:22.653866    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:22.653870    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:22.665677    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:22.665686    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:22.670075    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:22.670081    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:22.684303    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:22.684313    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:22.696608    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:22.696620    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:22.709983    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:22.709995    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:22.723113    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:22.723125    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:22.761307    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:22.761321    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:22.773168    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:22.773180    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:22.788550    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:22.788560    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:22.800921    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:22.800932    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:22.816058    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:22.816069    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:22.832943    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:22.832953    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:22.857980    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:22.857990    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:22.902396    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:22.902405    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:25.418480    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:30.420754    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:30.420858    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:30.433572    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:30.433650    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:30.452571    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:30.452647    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:30.465182    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:30.465258    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:30.483737    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:30.483806    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:30.494491    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:30.494555    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:30.504960    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:30.505029    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:30.515342    4422 logs.go:276] 0 containers: []
	W0731 12:08:30.515354    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:30.515418    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:30.531151    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:30.531170    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:30.531175    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:30.555626    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:30.555633    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:30.569553    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:30.569563    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:30.585209    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:30.585219    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:30.596659    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:30.596671    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:30.610208    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:30.610222    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:30.646085    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:30.646099    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:30.650752    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:30.650759    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:30.662196    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:30.662207    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:30.674028    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:30.674040    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:30.686352    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:30.686365    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:30.697671    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:30.697687    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:30.715052    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:30.715062    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:30.750267    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:30.750276    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:30.761869    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:30.761880    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:33.278425    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:38.280684    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:38.280820    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:38.291520    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:38.291592    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:38.303754    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:38.303821    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:38.315796    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:38.315867    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:38.326383    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:38.326445    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:38.337399    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:38.337464    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:38.348568    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:38.348628    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:38.358470    4422 logs.go:276] 0 containers: []
	W0731 12:08:38.358480    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:38.358545    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:38.369714    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:38.369731    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:38.369737    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:38.374372    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:38.374379    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:38.389077    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:38.389089    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:38.400705    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:38.400715    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:38.435418    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:38.435430    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:38.450318    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:38.450330    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:38.464841    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:38.464854    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:38.477103    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:38.477115    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:38.494264    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:38.494276    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:38.505974    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:38.505986    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:38.517900    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:38.517911    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:38.532522    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:38.532535    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:38.568361    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:38.568375    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:38.582788    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:38.582798    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:38.606367    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:38.606375    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:41.120302    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:46.122636    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:46.122758    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:46.134232    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:46.134319    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:46.145511    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:46.145589    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:46.156777    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:46.156853    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:46.168000    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:46.168067    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:46.178678    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:46.178755    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:46.190168    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:46.190242    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:46.201656    4422 logs.go:276] 0 containers: []
	W0731 12:08:46.201668    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:46.201730    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:46.213287    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:46.213309    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:46.213316    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:46.226432    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:46.226444    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:46.262506    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:46.262526    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:46.267156    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:46.267165    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:46.301633    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:46.301647    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:46.314149    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:46.314161    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:46.338051    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:46.338061    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:46.352384    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:46.352399    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:46.366256    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:46.366267    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:46.384926    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:46.384940    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:46.397810    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:46.397821    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:46.413117    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:46.413131    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:46.428675    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:46.428689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:46.441083    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:46.441095    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:46.461776    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:46.461788    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:48.979964    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:53.980378    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:53.980537    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:53.993034    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:53.993101    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:54.004796    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:54.004862    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:54.022079    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:54.022152    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:54.035518    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:54.035587    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:54.047792    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:54.047864    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:54.059073    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:54.059143    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:54.069719    4422 logs.go:276] 0 containers: []
	W0731 12:08:54.069734    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:54.069794    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:54.081489    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:54.081506    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:54.081513    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:54.094240    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:54.094250    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:54.107449    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:54.107468    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:54.112494    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:54.112507    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:54.128844    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:54.128860    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:54.142897    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:54.142909    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:54.159218    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:54.159236    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:54.173220    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:54.173235    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:54.188968    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:54.188980    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:54.213976    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:54.213986    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:54.226131    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:54.226141    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:54.265042    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:54.265054    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:54.277797    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:54.277809    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:54.296141    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:54.296152    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:54.308019    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:54.308031    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:56.843830    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:01.846015    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:01.846176    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:01.858607    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:01.858680    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:01.869867    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:01.869938    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:01.880677    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:01.880746    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:01.896646    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:01.896707    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:01.907159    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:01.907235    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:01.917457    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:01.917531    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:01.927929    4422 logs.go:276] 0 containers: []
	W0731 12:09:01.927940    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:01.927997    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:01.939038    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:01.939057    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:01.939063    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:01.944152    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:01.944167    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:01.958572    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:01.958583    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:01.970830    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:01.970841    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:01.983303    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:01.983316    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:02.006475    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:02.006482    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:02.040221    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:02.040241    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:02.075564    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:02.075582    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:02.088044    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:02.088055    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:02.103389    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:02.103398    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:02.117780    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:02.117790    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:02.129465    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:02.129476    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:02.141519    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:02.141530    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:02.178585    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:02.178597    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:02.195128    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:02.195142    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:04.709314    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:09.710422    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:09.710586    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:09.725732    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:09.725812    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:09.743555    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:09.743622    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:09.754574    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:09.754647    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:09.764722    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:09.764788    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:09.775185    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:09.775246    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:09.787850    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:09.787922    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:09.798015    4422 logs.go:276] 0 containers: []
	W0731 12:09:09.798029    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:09.798095    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:09.809062    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:09.809079    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:09.809084    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:09.820554    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:09.820563    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:09.825524    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:09.825531    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:09.840338    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:09.840349    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:09.855151    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:09.855160    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:09.867188    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:09.867199    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:09.885211    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:09.885223    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:09.901579    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:09.901590    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:09.936210    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:09.936224    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:09.948458    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:09.948467    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:09.960049    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:09.960059    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:09.983538    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:09.983546    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:10.000146    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:10.000158    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:10.011850    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:10.011863    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:10.045523    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:10.045532    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:12.561132    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:17.563393    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:17.563553    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:17.575233    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:17.575295    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:17.586035    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:17.586114    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:17.596675    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:17.596736    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:17.607851    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:17.607921    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:17.618251    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:17.618316    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:17.628825    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:17.628884    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:17.639426    4422 logs.go:276] 0 containers: []
	W0731 12:09:17.639438    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:17.639499    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:17.652632    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:17.652649    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:17.652654    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:17.657395    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:17.657403    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:17.681893    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:17.681905    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:17.696567    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:17.696578    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:17.709102    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:17.709114    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:17.720888    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:17.720900    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:17.732946    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:17.732958    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:17.754100    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:17.754113    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:17.787205    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:17.787213    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:17.821339    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:17.821351    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:17.836981    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:17.836991    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:17.848609    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:17.848621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:17.863768    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:17.863777    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:17.877959    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:17.877970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:17.890960    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:17.890970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:20.406370    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:25.408777    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:25.416484    4422 out.go:177] 
	W0731 12:09:25.420434    4422 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:09:25.420469    4422 out.go:239] * 
	W0731 12:09:25.422321    4422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:09:25.432284    4422 out.go:177] 

** /stderr **
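Note on the failure captured above: the start loop probes the apiserver's /healthz endpoint, giving each probe roughly five seconds (12:09:20.406 -> 12:09:25.408) until the overall 6m0s node wait expires with GUEST_START. A minimal sketch of such a probe for readers reproducing the check by hand; this is not minikube's api_server.go, and skipping TLS verification is an assumption made only so the apiserver's self-signed certificate does not get in the way:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues a single GET against /healthz and gives up after
// `timeout`, which is what produces the "context deadline exceeded
// (Client.Timeout exceeded while awaiting headers)" lines in the log.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			// Assumption for the sketch: accept the self-signed apiserver cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	// URL and ~5s per-probe budget taken from the log above.
	fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second))
}

A healthy apiserver answers 200 with body "ok"; here every probe timed out, which points at the control plane never coming up inside the QEMU guest rather than at the test harness itself.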
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-334000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-31 12:09:25.541968 -0700 PDT m=+3325.342195460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-334000 -n running-upgrade-334000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-334000 -n running-upgrade-334000: exit status 2 (15.60206025s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-334000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-232000          | force-systemd-flag-232000 | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-715000              | force-systemd-env-715000  | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-715000           | force-systemd-env-715000  | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT | 31 Jul 24 11:59 PDT |
	| start   | -p docker-flags-519000                | docker-flags-519000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-232000             | force-systemd-flag-232000 | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-232000          | force-systemd-flag-232000 | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT | 31 Jul 24 11:59 PDT |
	| start   | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-519000 ssh               | docker-flags-519000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-519000 ssh               | docker-flags-519000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-519000                | docker-flags-519000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT | 31 Jul 24 11:59 PDT |
	| start   | -p cert-options-939000                | cert-options-939000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-939000 ssh               | cert-options-939000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-939000 -- sudo        | cert-options-939000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-939000                | cert-options-939000       | jenkins | v1.33.1 | 31 Jul 24 11:59 PDT | 31 Jul 24 11:59 PDT |
	| start   | -p running-upgrade-334000             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 11:59 PDT | 31 Jul 24 12:00 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-334000             | running-upgrade-334000    | jenkins | v1.33.1 | 31 Jul 24 12:00 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.1 | 31 Jul 24 12:02 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-447000             | cert-expiration-447000    | jenkins | v1.33.1 | 31 Jul 24 12:02 PDT | 31 Jul 24 12:02 PDT |
	| start   | -p kubernetes-upgrade-760000          | kubernetes-upgrade-760000 | jenkins | v1.33.1 | 31 Jul 24 12:02 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-760000          | kubernetes-upgrade-760000 | jenkins | v1.33.1 | 31 Jul 24 12:03 PDT | 31 Jul 24 12:03 PDT |
	| start   | -p kubernetes-upgrade-760000          | kubernetes-upgrade-760000 | jenkins | v1.33.1 | 31 Jul 24 12:03 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-760000          | kubernetes-upgrade-760000 | jenkins | v1.33.1 | 31 Jul 24 12:03 PDT | 31 Jul 24 12:03 PDT |
	| start   | -p stopped-upgrade-532000             | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:03 PDT | 31 Jul 24 12:03 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-532000 stop           | minikube                  | jenkins | v1.26.0 | 31 Jul 24 12:03 PDT | 31 Jul 24 12:04 PDT |
	| start   | -p stopped-upgrade-532000             | stopped-upgrade-532000    | jenkins | v1.33.1 | 31 Jul 24 12:04 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 12:04:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
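The header above is klog's standard preamble; every line that follows matches the declared [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout. A small Go sketch, not part of minikube, that splits such a record into its fields, which can be handy when post-processing these reports:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
// record layout declared in the log header above.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	sample := "I0731 12:04:11.608738    4588 out.go:291] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		// m[1]=severity m[2]=mmdd m[3]=time m[4]=thread id m[5]=file:line m[6]=message
		fmt.Printf("severity=%s src=%s msg=%q\n", m[1], m[5], m[6])
	}
}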
	I0731 12:04:11.608738    4588 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:04:11.608909    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:04:11.608914    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:04:11.608917    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:04:11.609077    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:04:11.610276    4588 out.go:298] Setting JSON to false
	I0731 12:04:11.631377    4588 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3820,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:04:11.631446    4588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:04:11.636064    4588 out.go:177] * [stopped-upgrade-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:04:11.643003    4588 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:04:11.643048    4588 notify.go:220] Checking for updates...
	I0731 12:04:11.651017    4588 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:04:11.654085    4588 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:04:11.657899    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:04:11.661018    4588 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:04:11.664068    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:04:11.665700    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:04:11.668962    4588 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:04:11.672045    4588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:04:11.673767    4588 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:04:11.681012    4588 start.go:297] selected driver: qemu2
	I0731 12:04:11.681017    4588 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:11.681067    4588 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:04:11.683382    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:04:11.683398    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:04:11.683424    4588 start.go:340] cluster config:
	{Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:11.683478    4588 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:04:11.692036    4588 out.go:177] * Starting "stopped-upgrade-532000" primary control-plane node in "stopped-upgrade-532000" cluster
	I0731 12:04:11.696014    4588 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:04:11.696027    4588 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:04:11.696036    4588 cache.go:56] Caching tarball of preloaded images
	I0731 12:04:11.696091    4588 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:04:11.696096    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:04:11.696137    4588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/config.json ...
	I0731 12:04:11.696629    4588 start.go:360] acquireMachinesLock for stopped-upgrade-532000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:04:11.696663    4588 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "stopped-upgrade-532000"
	I0731 12:04:11.696670    4588 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:04:11.696676    4588 fix.go:54] fixHost starting: 
	I0731 12:04:11.696777    4588 fix.go:112] recreateIfNeeded on stopped-upgrade-532000: state=Stopped err=<nil>
	W0731 12:04:11.696785    4588 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:04:11.700995    4588 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-532000" ...
	I0731 12:04:14.135788    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:14.136029    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:14.156116    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:14.156198    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:14.169618    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:14.169682    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:14.181007    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:14.181086    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:14.192771    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:14.192838    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:14.203150    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:14.203216    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:14.213735    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:14.213792    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:14.223883    4422 logs.go:276] 0 containers: []
	W0731 12:04:14.223894    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:14.223946    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:14.234068    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:14.234088    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:14.234093    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:14.245542    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:14.245555    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:14.256869    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:14.256880    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:14.272395    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:14.272408    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:14.287484    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:14.287497    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:14.329333    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:14.329343    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:14.343417    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:14.343431    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:14.357041    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:14.357052    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:14.371954    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:14.371964    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:14.396770    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:14.396783    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:14.410577    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:14.410587    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:14.421499    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:14.421510    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:14.438645    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:14.438656    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:14.457695    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:14.457707    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:14.498056    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:14.498069    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:14.502259    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:14.502267    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:14.539417    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:14.539427    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:11.708937    4588 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:04:11.709003    4588 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50472-:22,hostfwd=tcp::50473-:2376,hostname=stopped-upgrade-532000 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/disk.qcow2
	I0731 12:04:11.757586    4588 main.go:141] libmachine: STDOUT: 
	I0731 12:04:11.757632    4588 main.go:141] libmachine: STDERR: 
	I0731 12:04:11.757639    4588 main.go:141] libmachine: Waiting for VM to start (ssh -p 50472 docker@127.0.0.1)...
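The "Waiting for VM to start (ssh -p 50472 docker@127.0.0.1)" step above amounts to retrying a connection against QEMU's hostfwd mapping (tcp::50472-:22) until the guest's sshd answers. A rough sketch of that wait loop, a hypothetical helper rather than libmachine's implementation:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort retries a TCP dial against a forwarded guest port until
// something accepts the connection or the overall deadline passes.
func waitForPort(addr string, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // the guest's sshd (or at least its TCP port) is up
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not reachable within %s", addr, overall)
}

func main() {
	// Port taken from the hostfwd mapping in the qemu command above.
	fmt.Println(waitForPort("127.0.0.1:50472", 2*time.Minute))
}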
	I0731 12:04:17.055614    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:22.057820    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:22.057977    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:22.071309    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:22.071387    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:22.084194    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:22.084278    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:22.096511    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:22.096588    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:22.109000    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:22.109071    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:22.120923    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:22.121002    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:22.133126    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:22.133203    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:22.147157    4422 logs.go:276] 0 containers: []
	W0731 12:04:22.147168    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:22.147227    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:22.159452    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:22.159476    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:22.159483    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:22.185421    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:22.185461    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:22.227305    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:22.227324    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:22.243510    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:22.243524    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:22.256685    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:22.256697    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:22.273786    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:22.273798    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:22.286839    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:22.286850    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:22.335632    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:22.335653    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:22.353525    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:22.353538    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:22.374348    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:22.374364    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:22.380591    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:22.380604    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:22.395859    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:22.395871    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:22.408748    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:22.408758    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:22.420849    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:22.420862    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:22.433053    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:22.433064    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:22.470300    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:22.470315    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:22.483064    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:22.483076    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:24.997504    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:30.000239    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:30.000660    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:30.039607    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:30.039739    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:30.068459    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:30.068557    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:30.085134    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:30.085202    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:30.097663    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:30.097728    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:30.108713    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:30.108780    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:30.119424    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:30.119491    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:30.130012    4422 logs.go:276] 0 containers: []
	W0731 12:04:30.130025    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:30.130090    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:30.146045    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:30.146065    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:30.146070    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:30.184724    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:30.184735    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:30.196369    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:30.196380    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:30.208311    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:30.208323    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:30.242904    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:30.242917    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:30.259906    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:30.259918    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:30.271125    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:30.271137    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:30.288789    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:30.288801    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:30.302940    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:30.302952    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:30.317433    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:30.317446    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:30.334483    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:30.334493    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:30.345798    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:30.345808    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:30.350149    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:30.350159    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:30.388114    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:30.388124    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:30.399929    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:30.399940    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:30.411364    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:30.411376    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:30.422762    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:30.422775    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:32.149649    4588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/config.json ...
	I0731 12:04:32.150465    4588 machine.go:94] provisionDockerMachine start ...
	I0731 12:04:32.150652    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.150964    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.150976    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:04:32.237443    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:04:32.237473    4588 buildroot.go:166] provisioning hostname "stopped-upgrade-532000"
	I0731 12:04:32.237558    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.237803    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.237814    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-532000 && echo "stopped-upgrade-532000" | sudo tee /etc/hostname
	I0731 12:04:32.321278    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-532000
	
	I0731 12:04:32.321371    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.321563    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.321574    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-532000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-532000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-532000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:04:32.391604    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:04:32.391616    4588 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19356-1202/.minikube CaCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19356-1202/.minikube}
	I0731 12:04:32.391630    4588 buildroot.go:174] setting up certificates
	I0731 12:04:32.391636    4588 provision.go:84] configureAuth start
	I0731 12:04:32.391642    4588 provision.go:143] copyHostCerts
	I0731 12:04:32.391729    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem, removing ...
	I0731 12:04:32.391736    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem
	I0731 12:04:32.391842    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem (1123 bytes)
	I0731 12:04:32.392024    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem, removing ...
	I0731 12:04:32.392030    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem
	I0731 12:04:32.392086    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem (1679 bytes)
	I0731 12:04:32.392198    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem, removing ...
	I0731 12:04:32.392202    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem
	I0731 12:04:32.392254    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem (1082 bytes)
	I0731 12:04:32.392340    4588 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-532000 san=[127.0.0.1 localhost minikube stopped-upgrade-532000]
	I0731 12:04:32.513592    4588 provision.go:177] copyRemoteCerts
	I0731 12:04:32.513636    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:04:32.513644    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:32.550182    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:04:32.557109    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:04:32.563651    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:04:32.570855    4588 provision.go:87] duration metric: took 179.21775ms to configureAuth
	I0731 12:04:32.570865    4588 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:04:32.570969    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:04:32.571005    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.571096    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.571102    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:04:32.637396    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:04:32.637404    4588 buildroot.go:70] root file system type: tmpfs
	I0731 12:04:32.637471    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:04:32.637522    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.637633    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.637668    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:04:32.709924    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:04:32.709968    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.710083    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.710099    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:04:33.086234    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 12:04:33.086269    4588 machine.go:97] duration metric: took 935.808291ms to provisionDockerMachine
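The docker.service update a few lines up uses an idempotent shell idiom: `diff -u old new || { mv ...; systemctl daemon-reload && ... restart docker; }`, so the unit file is only swapped in, and docker only restarted, when its contents actually changed (here the diff failed because the file did not exist yet, hence the restart). The same write-if-changed idea expressed in Go, with hypothetical paths, purely as an illustration:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged replaces path with data only when the contents differ,
// reporting whether a dependent service would need a restart.
func writeIfChanged(path string, data []byte) (changed bool, err error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, data) {
		return false, nil // identical: skip the restart, like `diff -u ... ||`
	}
	if err := os.WriteFile(path+".new", data, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path) // swap in place, like the `mv`
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println("restart needed:", changed, "err:", err)
}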
	I0731 12:04:33.086275    4588 start.go:293] postStartSetup for "stopped-upgrade-532000" (driver="qemu2")
	I0731 12:04:33.086282    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:04:33.086328    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:04:33.086338    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:33.122843    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:04:33.124388    4588 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:04:33.124395    4588 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/addons for local assets ...
	I0731 12:04:33.124486    4588 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/files for local assets ...
	I0731 12:04:33.124612    4588 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem -> 17012.pem in /etc/ssl/certs
	I0731 12:04:33.124745    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:04:33.127408    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:04:33.134674    4588 start.go:296] duration metric: took 48.394041ms for postStartSetup
	I0731 12:04:33.134686    4588 fix.go:56] duration metric: took 21.438302958s for fixHost
	I0731 12:04:33.134725    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:33.134822    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:33.134826    4588 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 12:04:33.200388    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722452673.609304963
	
	I0731 12:04:33.200401    4588 fix.go:216] guest clock: 1722452673.609304963
	I0731 12:04:33.200405    4588 fix.go:229] Guest: 2024-07-31 12:04:33.609304963 -0700 PDT Remote: 2024-07-31 12:04:33.134688 -0700 PDT m=+21.556032084 (delta=474.616963ms)
	I0731 12:04:33.200417    4588 fix.go:200] guest clock delta is within tolerance: 474.616963ms
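fix.go here compares the guest's clock (read over SSH) against the host's and accepts the restarted machine only when the drift stays inside a tolerance; the 474.616963ms delta passed. A toy version of that comparison; the 2s tolerance below is an assumption, since the log only shows that ~475ms was acceptable:

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host clock and
// whether the drift is within tolerance, mirroring the fix.go lines above.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	// Timestamps taken from the log above (guest: 1722452673.609304963).
	guest := time.Unix(1722452673, 609304963)
	host := guest.Add(-474616963 * time.Nanosecond) // Remote: ...12:04:33.134688
	d, ok := clockDelta(guest, host, 2*time.Second) // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}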
	I0731 12:04:33.200420    4588 start.go:83] releasing machines lock for "stopped-upgrade-532000", held for 21.504044084s
	I0731 12:04:33.200493    4588 ssh_runner.go:195] Run: cat /version.json
	I0731 12:04:33.200495    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:04:33.200502    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:33.200515    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	W0731 12:04:33.201088    4588 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50594->127.0.0.1:50472: read: connection reset by peer
	I0731 12:04:33.201107    4588 retry.go:31] will retry after 343.569647ms: ssh: handshake failed: read tcp 127.0.0.1:50594->127.0.0.1:50472: read: connection reset by peer
	W0731 12:04:33.604639    4588 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:04:33.604862    4588 ssh_runner.go:195] Run: systemctl --version
	I0731 12:04:33.609384    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:04:33.613545    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:04:33.613627    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:04:33.619997    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:04:33.629189    4588 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
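
The two find/sed passes above normalize any bridge and podman CNI configs under /etc/cni/net.d: IPv6 dst/subnet entries are dropped and the pod subnet is pinned to 10.244.0.0/16. Applied to just the one file this run matched, the rewrite reduces to the following sketch (same sed expressions and values as in the log):

    # Pin the podman bridge CNI config to the kubeadm pod CIDR and gateway.
    sudo sed -i -r \
      -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
      -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
      /etc/cni/net.d/87-podman-bridge.conflist
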
	I0731 12:04:33.629204    4588 start.go:495] detecting cgroup driver to use...
	I0731 12:04:33.629357    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:04:33.640482    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:04:33.645171    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:04:33.649142    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:04:33.649189    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:04:33.652807    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:04:33.656367    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:04:33.660011    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:04:33.663471    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:04:33.667158    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:04:33.670524    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:04:33.673387    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:04:33.676210    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:04:33.679190    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:04:33.682084    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:33.762920    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
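
Taken together, the sed edits above switch containerd to the cgroupfs cgroup driver and the runc v2 shim before the restart. The same steps, consolidated into one re-runnable script (commands taken from the individual Run: lines above):

    # Reconfigure containerd for cgroupfs + the runc v2 shim, then restart it.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
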
	I0731 12:04:33.769442    4588 start.go:495] detecting cgroup driver to use...
	I0731 12:04:33.769510    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:04:33.775211    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:04:33.780076    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:04:33.787242    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:04:33.792216    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:04:33.796694    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:04:33.867067    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:04:33.872100    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:04:33.877273    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:04:33.878504    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:04:33.881205    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:04:33.886110    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:04:33.975722    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:04:34.037586    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:04:34.037650    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:04:34.042773    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:34.119900    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:04:35.272574    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152674042s)
	I0731 12:04:35.272628    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:04:35.277067    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:04:35.281583    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:04:35.360309    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:04:35.440470    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:35.525238    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:04:35.530911    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:04:35.535586    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:35.618495    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:04:35.657447    4588 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:04:35.657524    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:04:35.659500    4588 start.go:563] Will wait 60s for crictl version
	I0731 12:04:35.659553    4588 ssh_runner.go:195] Run: which crictl
	I0731 12:04:35.661162    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:04:35.675176    4588 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:04:35.675245    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:04:35.693787    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
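
After stopping containerd and crio and pointing crictl at cri-dockerd, the start code polls for /var/run/cri-dockerd.sock for up to 60s ("Will wait 60s for socket path" above) before probing the runtime version. minikube does this in Go; the shell equivalent of the wait loop, as a sketch:

    # Wait up to 60s for the CRI socket to appear, then stop polling.
    for _ in $(seq 1 60); do
      stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
      sleep 1
    done
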
	I0731 12:04:32.949219    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:35.713518    4588 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:04:35.713644    4588 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:04:35.714839    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
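
The hosts update above uses a remove-then-append pattern: filter out any existing host.minikube.internal line, append a fresh entry, and install the result with sudo cp (a plain `sudo echo >> /etc/hosts` would not work, since the redirect runs as the unprivileged user). The same pattern, sketched:

    # Idempotently (re)write the host.minikube.internal entry in /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
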
	I0731 12:04:35.718610    4588 kubeadm.go:883] updating cluster {Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:04:35.718651    4588 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:04:35.718692    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:04:35.729499    4588 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:04:35.729507    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:04:35.729552    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:04:35.732518    4588 ssh_runner.go:195] Run: which lz4
	I0731 12:04:35.733771    4588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:04:35.735066    4588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:04:35.735076    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
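
The failed stat above is the expected miss branch: the preload tarball is only copied when the existence check fails, which is what triggers the 359MB scp. The check-then-copy pattern, sketched with a hypothetical `guest` SSH alias:

    # Copy the preload tarball to the guest only when it is not already there.
    if ! ssh guest 'stat -c "%s %y" /preloaded.tar.lz4' >/dev/null 2>&1; then
      scp preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 guest:/preloaded.tar.lz4
    fi
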
	I0731 12:04:37.951390    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:37.951513    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:37.964046    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:37.964126    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:37.975378    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:37.975450    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:37.986268    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:37.986337    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:37.998363    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:37.998440    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:38.008914    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:38.008988    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:38.021501    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:38.021567    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:38.047335    4422 logs.go:276] 0 containers: []
	W0731 12:04:38.047383    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:38.047452    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:38.068832    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:38.068851    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:38.068857    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:38.109076    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:38.109090    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:38.124621    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:38.124630    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:38.163627    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:38.163639    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:38.175068    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:38.175079    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:38.192762    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:38.192773    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:38.216357    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:38.216364    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:38.230590    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:38.230603    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:38.244365    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:38.244374    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:38.258488    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:38.258502    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:38.270668    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:38.270680    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:38.282196    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:38.282212    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:38.317422    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:38.317434    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:38.330526    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:38.330541    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:38.342316    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:38.342331    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:38.353977    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:38.353991    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:38.366405    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:38.366421    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:40.873264    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:36.627794    4588 docker.go:649] duration metric: took 894.061917ms to copy over tarball
	I0731 12:04:36.627846    4588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:04:37.785663    4588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157820958s)
	I0731 12:04:37.785677    4588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:04:37.800881    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:04:37.803871    4588 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:04:37.809222    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:37.891213    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:04:39.413455    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.522248958s)
	I0731 12:04:39.413572    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:04:39.423995    4588 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:04:39.424006    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:04:39.424010    4588 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:04:39.429125    4588 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:39.431074    4588 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.432604    4588 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.432751    4588 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:39.434479    4588 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.434507    4588 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.436010    4588 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.436086    4588 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.437076    4588 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.437090    4588 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.438081    4588 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.438182    4588 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.438937    4588 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:04:39.439048    4588 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.439929    4588 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.440470    4588 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:04:39.880101    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.888410    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.891328    4588 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:04:39.891351    4588 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.891401    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.904392    4588 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:04:39.904403    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:04:39.904413    4588 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.904459    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.911716    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.915030    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:04:39.915889    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.920546    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:04:39.921977    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.928997    4588 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:04:39.929020    4588 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.929074    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0731 12:04:39.935980    4588 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:04:39.936145    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.941107    4588 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:04:39.941132    4588 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.941187    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.946905    4588 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:04:39.946925    4588 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.946976    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.948756    4588 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:04:39.948769    4588 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:04:39.948799    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:04:39.958374    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:04:39.970730    4588 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:04:39.970754    4588 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.970810    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.971753    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:04:39.971815    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:04:39.971859    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:04:39.977123    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:04:39.977239    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:04:39.983825    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:04:39.983839    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:04:39.983850    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:04:39.983850    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:04:39.984011    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:04:39.984094    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:04:39.986503    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:04:39.986516    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:04:40.010541    4588 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:04:40.010556    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0731 12:04:40.084781    4588 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:04:40.084889    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.085900    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:04:40.096417    4588 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:04:40.096488    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:04:40.127484    4588 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:04:40.127509    4588 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.127574    4588 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.221038    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:04:40.221106    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:04:40.221213    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:04:40.233285    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:04:40.233315    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:04:40.299277    4588 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:04:40.299292    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:04:40.621357    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:04:40.621380    4588 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:04:40.621385    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:04:40.778715    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:04:40.778756    4588 cache_images.go:92] duration metric: took 1.354760125s to LoadCachedImages
	W0731 12:04:40.778800    4588 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
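
Each staged tarball under /var/lib/minikube/images is streamed straight into the Docker daemon with `sudo cat ... | docker load` (the docker.go:304 lines above); the warning is the one cache miss, the kube-scheduler tarball being absent on the host, so LoadCachedImages finishes only partially satisfied. The per-image load step, sketched as a loop:

    # Load every staged image tarball into the guest's Docker daemon.
    for img in /var/lib/minikube/images/*; do
      sudo cat "$img" | docker load
    done
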
	I0731 12:04:40.778805    4588 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:04:40.778867    4588 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-532000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:04:40.778936    4588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:04:40.792906    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:04:40.792917    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:04:40.792924    4588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:04:40.792933    4588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-532000 NodeName:stopped-upgrade-532000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:04:40.792993    4588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-532000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:04:40.793048    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:04:40.795811    4588 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:04:40.795834    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:04:40.798699    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:04:40.803615    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:04:40.808733    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:04:40.813836    4588 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:04:40.814976    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:04:40.818832    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:40.894784    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:04:40.901354    4588 certs.go:68] Setting up /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000 for IP: 10.0.2.15
	I0731 12:04:40.901368    4588 certs.go:194] generating shared ca certs ...
	I0731 12:04:40.901377    4588 certs.go:226] acquiring lock for ca certs: {Name:mkf42ffcc2bf4238c4563b7710ee6f745a9fc0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:40.901566    4588 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key
	I0731 12:04:40.901621    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key
	I0731 12:04:40.901628    4588 certs.go:256] generating profile certs ...
	I0731 12:04:40.901696    4588 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key
	I0731 12:04:40.901716    4588 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741
	I0731 12:04:40.901729    4588 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:04:41.091849    4588 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 ...
	I0731 12:04:41.091864    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741: {Name:mk4631f82fd7195a71dca1562372b13c69979a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.092158    4588 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741 ...
	I0731 12:04:41.092164    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741: {Name:mk8f0693cc4cbd008d7e5e97e68b7d08bcead493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.092309    4588 certs.go:381] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt
	I0731 12:04:41.092457    4588 certs.go:385] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key
	I0731 12:04:41.092623    4588 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.key
	I0731 12:04:41.092756    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem (1338 bytes)
	W0731 12:04:41.092788    4588 certs.go:480] ignoring /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701_empty.pem, impossibly tiny 0 bytes
	I0731 12:04:41.092795    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:04:41.092814    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:04:41.092832    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:04:41.092850    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem (1679 bytes)
	I0731 12:04:41.092887    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:04:41.093226    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:04:41.100213    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:04:41.107309    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:04:41.114561    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:04:41.122760    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:04:41.130321    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:04:41.137627    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:04:41.144539    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:04:41.151537    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:04:41.158806    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem --> /usr/share/ca-certificates/1701.pem (1338 bytes)
	I0731 12:04:41.166091    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /usr/share/ca-certificates/17012.pem (1708 bytes)
	I0731 12:04:41.172759    4588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:04:41.177684    4588 ssh_runner.go:195] Run: openssl version
	I0731 12:04:41.179486    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:04:41.182842    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.184405    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.184424    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.186223    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:04:41.189057    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1701.pem && ln -fs /usr/share/ca-certificates/1701.pem /etc/ssl/certs/1701.pem"
	I0731 12:04:41.191931    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.193434    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:21 /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.193455    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.195271    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1701.pem /etc/ssl/certs/51391683.0"
	I0731 12:04:41.198805    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17012.pem && ln -fs /usr/share/ca-certificates/17012.pem /etc/ssl/certs/17012.pem"
	I0731 12:04:41.201935    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.203262    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:21 /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.203278    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.205106    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17012.pem /etc/ssl/certs/3ec20f2e.0"
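
The openssl/ln sequence above installs each CA into OpenSSL's default verify directory under its subject-hash name (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certs). The pattern for one cert, sketched:

    # Link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
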
	I0731 12:04:41.207959    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:04:41.209537    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:04:41.211426    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:04:41.213497    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:04:41.216114    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:04:41.218068    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:04:41.219889    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
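
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration. The check for a single cert, sketched:

    # Exit status is non-zero if the cert expires within 86400 seconds (24h).
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert expires within 24h; regenerate" >&2
    fi
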
	I0731 12:04:41.221783    4588 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:41.221850    4588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:04:41.231790    4588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:04:41.235328    4588 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:04:41.235333    4588 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:04:41.235355    4588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:04:41.238573    4588 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:04:41.238868    4588 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-532000" does not appear in /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:04:41.238962    4588 kubeconfig.go:62] /Users/jenkins/minikube-integration/19356-1202/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-532000" cluster setting kubeconfig missing "stopped-upgrade-532000" context setting]
	I0731 12:04:41.239151    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.239573    4588 kapi.go:59] client config for stopped-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bd41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:04:41.239882    4588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:04:41.242602    4588 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-532000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0731 12:04:41.242607    4588 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:04:41.242651    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:04:41.253229    4588 docker.go:483] Stopping containers: [4723c1374220 b381bfc4361b ec713c22bdd4 96caf573f6dd 1afdb28c0dae 250b8cef76fb 0ee55201c776 e533f78e771c]
	I0731 12:04:41.253288    4588 ssh_runner.go:195] Run: docker stop 4723c1374220 b381bfc4361b ec713c22bdd4 96caf573f6dd 1afdb28c0dae 250b8cef76fb 0ee55201c776 e533f78e771c
	I0731 12:04:41.263990    4588 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:04:41.269496    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:04:41.272102    4588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:04:41.272107    4588 kubeadm.go:157] found existing configuration files:
	
	I0731 12:04:41.272125    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf
	I0731 12:04:41.275218    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:04:41.275239    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:04:41.278455    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf
	I0731 12:04:41.280959    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:04:41.280982    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:04:41.283713    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf
	I0731 12:04:41.286841    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:04:41.286864    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:04:41.289743    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf
	I0731 12:04:41.292205    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:04:41.292225    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
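The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is probed for the expected control-plane endpoint, and a file that is missing or points elsewhere is deleted so that the following "kubeadm init phase kubeconfig" regenerates it. A minimal standalone sketch of the pattern (the helper name and error handling are hypothetical, not minikube's actual code):

    // cleanStaleConfigs removes kubeconfigs that do not reference the
    // expected control-plane endpoint. grep exits non-zero both when the
    // endpoint is absent and when the file does not exist; either way
    // the config is treated as stale, mirroring the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func cleanStaleConfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:50507")
    }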
	I0731 12:04:41.295279    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:04:41.298180    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:41.321436    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:45.875385    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:45.875494    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:45.886598    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:45.886682    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:45.898711    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:45.898784    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:45.910149    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:45.910215    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:45.922288    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:45.922367    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:45.932890    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:45.932962    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:45.943715    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:45.943784    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:45.954402    4422 logs.go:276] 0 containers: []
	W0731 12:04:45.954414    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:45.954474    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:45.964522    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:45.964539    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:45.964544    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:45.979264    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:45.979275    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:45.991413    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:45.991424    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:46.014904    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:46.014912    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:46.057923    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:46.057941    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:46.071679    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:46.071689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:46.084639    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:46.084650    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:46.096469    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:46.096480    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:46.100891    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:46.100901    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:46.135985    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:46.135996    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:46.151665    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:46.151678    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:46.192504    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:46.192520    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:46.216781    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:46.216791    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:46.229240    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:46.229255    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:46.246067    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:46.246080    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:46.258857    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:46.258869    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:46.276802    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:46.276812    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
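Each failed health check triggers the diagnostics pass above: containers are enumerated per component using the k8s_<name>_ prefix that kubelet gives Docker containers, and the last 400 log lines of each are tailed, alongside the kubelet and Docker journals, dmesg, "kubectl describe nodes", and overall container status. A rough standalone sketch of the per-component enumeration (hypothetical helper, assuming a reachable Docker runtime):

    // For each control-plane component, list all containers (running or
    // exited) and tail their logs, mirroring the repeated
    // "docker ps -a --filter=name=k8s_<component>" loop in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			continue // sketch: skip components we cannot list
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			_ = exec.Command("docker", "logs", "--tail", "400", id).Run()
    		}
    	}
    }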
	I0731 12:04:41.848180    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:41.984866    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:42.007769    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
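Rather than one monolithic "kubeadm init", the restart path replays individual init phases against the generated config, reusing the certificates and data already on disk. The sequence reconstructed from the Run lines above (a sketch that only prints the commands; paths and version come from the log):

    package main

    import "fmt"

    func main() {
    	const cmd = `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`
    	for _, phase := range []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	} {
    		fmt.Printf(cmd+"\n", phase)
    	}
    }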
	I0731 12:04:42.032822    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:04:42.032900    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:42.533159    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:43.034947    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:43.039284    4588 api_server.go:72] duration metric: took 1.006472792s to wait for apiserver process to appear ...
	I0731 12:04:43.039294    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:04:43.039302    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
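From here both test processes (PIDs 4422 and 4588) interleave in the same wait loop: poll the apiserver's /healthz with a short client timeout, record the timeout as "stopped: ... context deadline exceeded", and retry. A minimal standalone approximation (the 5s timeout matches the cadence visible in the timestamps; skipping TLS verification is a simplification of this sketch, not what minikube does):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // each failed round takes ~5s, as in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for {
    		fmt.Println("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...")
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Printf("stopped: %v\n", err)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			break // apiserver is healthy
    		}
    	}
    }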
	I0731 12:04:48.790536    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:48.041339    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:48.041406    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:53.793159    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:53.793406    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:04:53.821520    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:04:53.821638    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:04:53.837366    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:04:53.837451    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:04:53.850375    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:04:53.850438    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:04:53.861326    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:04:53.861398    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:04:53.871899    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:04:53.871957    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:04:53.882743    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:04:53.882818    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:04:53.893430    4422 logs.go:276] 0 containers: []
	W0731 12:04:53.893441    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:04:53.893492    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:04:53.903936    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:04:53.903957    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:04:53.903975    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:04:53.908582    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:04:53.908591    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:04:53.945965    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:04:53.945975    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:04:53.957505    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:04:53.957515    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:04:53.972591    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:04:53.972601    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:04:53.990036    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:04:53.990046    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:04:54.001175    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:04:54.001185    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:04:54.024735    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:04:54.024743    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:04:54.059124    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:04:54.059136    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:04:54.072844    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:04:54.072860    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:04:54.093842    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:04:54.093854    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:04:54.105609    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:04:54.105621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:04:54.120344    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:04:54.120354    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:04:54.131725    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:04:54.131736    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:04:54.143928    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:04:54.143943    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:04:54.184169    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:04:54.184187    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:04:54.199350    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:04:54.199361    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:04:53.041693    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:53.041761    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:56.713876    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:58.042272    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:58.042294    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:01.716415    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:01.716653    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:01.735586    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:05:01.735677    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:01.747949    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:05:01.748027    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:01.758677    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:05:01.758750    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:01.769629    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:05:01.769701    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:01.780481    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:05:01.780569    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:01.791006    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:05:01.791076    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:01.801717    4422 logs.go:276] 0 containers: []
	W0731 12:05:01.801728    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:01.801786    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:01.817820    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:05:01.817841    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:05:01.817846    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:05:01.833492    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:01.833503    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:01.838371    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:05:01.838385    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:05:01.852909    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:05:01.852920    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:05:01.864383    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:05:01.864396    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:05:01.879347    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:05:01.879358    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:05:01.891361    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:05:01.891374    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:05:01.906655    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:01.906666    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:01.948363    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:01.948371    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:01.982791    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:05:01.982801    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:05:01.994893    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:05:01.994908    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:05:02.009389    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:05:02.009402    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:05:02.027323    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:05:02.027334    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:05:02.040749    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:02.040764    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:02.063596    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:05:02.063607    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:05:02.100730    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:05:02.100744    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:05:02.115464    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:05:02.115475    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:04.629470    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:03.042645    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:03.042739    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:09.631749    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:09.631965    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:09.643229    4422 logs.go:276] 2 containers: [4165c807807d 5e670ba0c351]
	I0731 12:05:09.643307    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:09.654605    4422 logs.go:276] 2 containers: [b68b404ce7f0 da611b7714e6]
	I0731 12:05:09.654684    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:09.665898    4422 logs.go:276] 1 containers: [4d28ce1cee9d]
	I0731 12:05:09.665976    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:09.680905    4422 logs.go:276] 2 containers: [ff6b99382c6e 9a56e259f1dd]
	I0731 12:05:09.680979    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:09.691224    4422 logs.go:276] 1 containers: [badac641ae8a]
	I0731 12:05:09.691290    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:09.701604    4422 logs.go:276] 2 containers: [bff68ab14310 5e37db09059b]
	I0731 12:05:09.701673    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:09.711440    4422 logs.go:276] 0 containers: []
	W0731 12:05:09.711453    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:09.711505    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:09.721949    4422 logs.go:276] 2 containers: [97b6dcc47e80 6cf0c9e93f46]
	I0731 12:05:09.721968    4422 logs.go:123] Gathering logs for kube-apiserver [4165c807807d] ...
	I0731 12:05:09.721973    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4165c807807d"
	I0731 12:05:09.736228    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:09.736240    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:09.760043    4422 logs.go:123] Gathering logs for etcd [da611b7714e6] ...
	I0731 12:05:09.760052    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da611b7714e6"
	I0731 12:05:09.774689    4422 logs.go:123] Gathering logs for coredns [4d28ce1cee9d] ...
	I0731 12:05:09.774703    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d28ce1cee9d"
	I0731 12:05:09.786258    4422 logs.go:123] Gathering logs for kube-scheduler [ff6b99382c6e] ...
	I0731 12:05:09.786274    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff6b99382c6e"
	I0731 12:05:09.797797    4422 logs.go:123] Gathering logs for kube-scheduler [9a56e259f1dd] ...
	I0731 12:05:09.797807    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a56e259f1dd"
	I0731 12:05:09.812730    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:09.812741    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:09.817214    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:09.817223    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:09.851241    4422 logs.go:123] Gathering logs for kube-apiserver [5e670ba0c351] ...
	I0731 12:05:09.851254    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e670ba0c351"
	I0731 12:05:09.889947    4422 logs.go:123] Gathering logs for etcd [b68b404ce7f0] ...
	I0731 12:05:09.889959    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68b404ce7f0"
	I0731 12:05:09.903630    4422 logs.go:123] Gathering logs for kube-controller-manager [bff68ab14310] ...
	I0731 12:05:09.903644    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bff68ab14310"
	I0731 12:05:09.921672    4422 logs.go:123] Gathering logs for storage-provisioner [97b6dcc47e80] ...
	I0731 12:05:09.921683    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97b6dcc47e80"
	I0731 12:05:09.933134    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:09.933143    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:09.973278    4422 logs.go:123] Gathering logs for kube-proxy [badac641ae8a] ...
	I0731 12:05:09.973286    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 badac641ae8a"
	I0731 12:05:09.984710    4422 logs.go:123] Gathering logs for kube-controller-manager [5e37db09059b] ...
	I0731 12:05:09.984720    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e37db09059b"
	I0731 12:05:09.995956    4422 logs.go:123] Gathering logs for storage-provisioner [6cf0c9e93f46] ...
	I0731 12:05:09.995968    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cf0c9e93f46"
	I0731 12:05:10.007059    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:05:10.007072    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:08.043548    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:08.043612    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:12.521079    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:13.044743    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:13.044798    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:17.523365    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:17.523488    4422 kubeadm.go:597] duration metric: took 4m4.669988625s to restartPrimaryControlPlane
	W0731 12:05:17.523621    4422 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:05:17.523677    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
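After roughly four minutes of failed health checks, the restart attempt is abandoned: the node is wiped with "kubeadm reset --force" and a full "kubeadm init" is re-run further below, with preflight errors ignored because leftover manifests, the etcd data directory, and the already-bound kubelet port would otherwise fail the checks. A sketch of that fallback decision (command strings copied from the log, the ignored-errors list abbreviated; the control flow itself is a hypothetical reconstruction):

    package main

    import "fmt"

    func main() {
    	restartHealthy := false // the log shows 4m4.67s of /healthz timeouts
    	if !restartHealthy {
    		fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
    		fmt.Println(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force`)
    		fmt.Println(`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...`)
    	}
    }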
	I0731 12:05:18.512727    4422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:05:18.517720    4422 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:05:18.520565    4422 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:05:18.523230    4422 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:05:18.523236    4422 kubeadm.go:157] found existing configuration files:
	
	I0731 12:05:18.523260    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf
	I0731 12:05:18.526360    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:05:18.526382    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:05:18.529399    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf
	I0731 12:05:18.531944    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:05:18.531967    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:05:18.534910    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf
	I0731 12:05:18.537898    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:05:18.537921    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:05:18.540428    4422 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf
	I0731 12:05:18.543060    4422 kubeadm.go:163] "https://control-plane.minikube.internal:50281" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50281 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:05:18.543082    4422 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:05:18.546182    4422 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:05:18.565946    4422 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:05:18.565974    4422 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:05:18.611886    4422 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:05:18.611975    4422 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:05:18.612091    4422 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 12:05:18.664013    4422 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:05:18.667685    4422 out.go:204]   - Generating certificates and keys ...
	I0731 12:05:18.667718    4422 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:05:18.667764    4422 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:05:18.667810    4422 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:05:18.667842    4422 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:05:18.667885    4422 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:05:18.667923    4422 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:05:18.667963    4422 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:05:18.667996    4422 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:05:18.668071    4422 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:05:18.668112    4422 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:05:18.668132    4422 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:05:18.668161    4422 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:05:18.691319    4422 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:05:18.756627    4422 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:05:18.793635    4422 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:05:18.998716    4422 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:05:19.034447    4422 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:05:19.034815    4422 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:05:19.034843    4422 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:05:19.128211    4422 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:05:19.131413    4422 out.go:204]   - Booting up control plane ...
	I0731 12:05:19.131463    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:05:19.131503    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:05:19.131537    4422 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:05:19.131587    4422 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:05:19.131661    4422 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:05:18.046090    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:18.046114    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:23.132985    4422 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002179 seconds
	I0731 12:05:23.133049    4422 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:05:23.136226    4422 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:05:23.652700    4422 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:05:23.652993    4422 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-334000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:05:24.156407    4422 kubeadm.go:310] [bootstrap-token] Using token: jndj12.muje0ideebh5ulzd
	I0731 12:05:24.162145    4422 out.go:204]   - Configuring RBAC rules ...
	I0731 12:05:24.162211    4422 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:05:24.162255    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:05:24.168877    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:05:24.169705    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:05:24.170574    4422 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:05:24.171577    4422 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:05:24.174548    4422 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:05:24.363498    4422 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:05:24.560761    4422 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:05:24.561242    4422 kubeadm.go:310] 
	I0731 12:05:24.561279    4422 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:05:24.561284    4422 kubeadm.go:310] 
	I0731 12:05:24.561322    4422 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:05:24.561325    4422 kubeadm.go:310] 
	I0731 12:05:24.561337    4422 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:05:24.561364    4422 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:05:24.561394    4422 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:05:24.561399    4422 kubeadm.go:310] 
	I0731 12:05:24.561433    4422 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:05:24.561440    4422 kubeadm.go:310] 
	I0731 12:05:24.561465    4422 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:05:24.561468    4422 kubeadm.go:310] 
	I0731 12:05:24.561497    4422 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:05:24.561543    4422 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:05:24.561590    4422 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:05:24.561593    4422 kubeadm.go:310] 
	I0731 12:05:24.561641    4422 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:05:24.561683    4422 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:05:24.561687    4422 kubeadm.go:310] 
	I0731 12:05:24.561726    4422 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jndj12.muje0ideebh5ulzd \
	I0731 12:05:24.561780    4422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d \
	I0731 12:05:24.561795    4422 kubeadm.go:310] 	--control-plane 
	I0731 12:05:24.561800    4422 kubeadm.go:310] 
	I0731 12:05:24.561842    4422 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:05:24.561845    4422 kubeadm.go:310] 
	I0731 12:05:24.561883    4422 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jndj12.muje0ideebh5ulzd \
	I0731 12:05:24.561940    4422 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d 
	I0731 12:05:24.562001    4422 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
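The --discovery-token-ca-cert-hash value in the join commands above is the hex-encoded SHA-256 digest of the cluster CA certificate's Subject Public Key Info. It can be recomputed from ca.crt; a minimal sketch (the certificate path is the clientCAFile from the kubelet configuration earlier in this log):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The hash covers the DER-encoded SubjectPublicKeyInfo, not the
    	// whole certificate.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }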
	I0731 12:05:24.562010    4422 cni.go:84] Creating CNI manager for ""
	I0731 12:05:24.562019    4422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:05:24.569390    4422 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:05:24.573515    4422 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:05:24.576540    4422 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 12:05:24.581597    4422 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:05:24.581636    4422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:05:24.581659    4422 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-334000 minikube.k8s.io/updated_at=2024_07_31T12_05_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=running-upgrade-334000 minikube.k8s.io/primary=true
	I0731 12:05:24.626821    4422 kubeadm.go:1113] duration metric: took 45.218583ms to wait for elevateKubeSystemPrivileges
	I0731 12:05:24.626839    4422 ops.go:34] apiserver oom_adj: -16
	I0731 12:05:24.626843    4422 kubeadm.go:394] duration metric: took 4m11.787507792s to StartCluster
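The minikube-rbac clusterrolebinding created just above grants cluster-admin to the default service account in kube-system, presumably so that kube-system pods such as the storage-provisioner can manage cluster resources. The same binding expressed with client-go instead of shelling out to kubectl (a sketch; the log shows the kubectl binary being invoked directly):

    package main

    import (
    	"context"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	crb := &rbacv1.ClusterRoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "ClusterRole",
    			Name:     "cluster-admin",
    		},
    		Subjects: []rbacv1.Subject{{
    			Kind:      "ServiceAccount",
    			Name:      "default",
    			Namespace: "kube-system",
    		}},
    	}
    	_, err = cs.RbacV1().ClusterRoleBindings().Create(
    		context.Background(), crb, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    }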
	I0731 12:05:24.626853    4422 settings.go:142] acquiring lock: {Name:mk8345ab3fe8ab5ac7063435ec374691aa431221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:05:24.626944    4422 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:05:24.627321    4422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:05:24.627521    4422 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:05:24.627585    4422 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:05:24.627616    4422 config.go:182] Loaded profile config "running-upgrade-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:05:24.627622    4422 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-334000"
	I0731 12:05:24.627679    4422 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-334000"
	I0731 12:05:24.627647    4422 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-334000"
	W0731 12:05:24.627684    4422 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:05:24.627697    4422 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-334000"
	I0731 12:05:24.627711    4422 host.go:66] Checking if "running-upgrade-334000" exists ...
	I0731 12:05:24.627968    4422 retry.go:31] will retry after 504.651896ms: connect: dial unix /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/monitor: connect: connection refused
	I0731 12:05:24.628595    4422 kapi.go:59] client config for running-upgrade-334000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/running-upgrade-334000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1046b01b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:05:24.628722    4422 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-334000"
	W0731 12:05:24.628726    4422 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:05:24.628732    4422 host.go:66] Checking if "running-upgrade-334000" exists ...
	I0731 12:05:24.629251    4422 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:05:24.629255    4422 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:05:24.629260    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:05:24.631257    4422 out.go:177] * Verifying Kubernetes components...
	I0731 12:05:24.639354    4422 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:05:24.723945    4422 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:05:24.729487    4422 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:05:24.729526    4422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:05:24.731160    4422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:05:24.735827    4422 api_server.go:72] duration metric: took 108.297375ms to wait for apiserver process to appear ...
	I0731 12:05:24.735835    4422 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:05:24.735841    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:25.139303    4422 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:05:25.143302    4422 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:05:25.143309    4422 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:05:25.143316    4422 sshutil.go:53] new ssh client: &{IP:localhost Port:50249 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/running-upgrade-334000/id_rsa Username:docker}
	I0731 12:05:25.177262    4422 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:05:23.047568    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:23.047589    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:29.737945    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:29.737992    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:28.048242    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:28.048340    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:34.738306    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:34.738345    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:33.050900    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:33.050933    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:39.738682    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:39.738702    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:38.053102    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:38.053149    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:44.739201    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:44.739228    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:43.055296    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:43.055468    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:43.071365    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:43.071445    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:43.083965    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:43.084043    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:43.099703    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:43.099787    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:43.110052    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:43.110125    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:43.120929    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:43.121002    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:43.133225    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:43.133306    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:43.148976    4588 logs.go:276] 0 containers: []
	W0731 12:05:43.148989    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:43.149045    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:43.159646    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:05:43.159674    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:43.159682    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:43.165264    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:43.165274    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:43.182996    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:43.183012    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:43.199114    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:43.199126    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:43.213130    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:43.213147    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:43.250383    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:43.250401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:43.264253    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:43.264268    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:43.276460    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:43.276469    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:43.287912    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:43.287926    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:43.311421    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:43.311428    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:43.394932    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:43.394944    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:05:43.438209    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:43.438230    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:43.450557    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:43.450569    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:43.467529    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:43.467539    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:43.478947    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:43.478958    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:43.493865    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:43.493875    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:43.505327    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:43.505339    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:46.019995    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:49.739823    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:49.739883    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:51.020985    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:51.021140    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:51.038434    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:51.038523    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:51.049350    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:51.049424    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:51.059852    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:51.059916    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:51.071010    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:51.071082    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:51.081380    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:51.081443    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:51.095615    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:51.095676    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:51.106053    4588 logs.go:276] 0 containers: []
	W0731 12:05:51.106064    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:51.106122    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:51.117736    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:05:51.117758    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:51.117763    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:51.130633    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:51.130646    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:51.142217    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:51.142228    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:05:51.181617    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:51.181633    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:51.197419    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:51.197432    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:51.217249    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:51.217261    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:51.230436    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:51.230450    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:51.244335    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:51.244345    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:51.263622    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:51.263636    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:51.288367    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:51.288376    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:51.300146    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:51.300157    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:51.304540    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:51.304547    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:51.341118    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:51.341130    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:51.360563    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:51.360574    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:51.375886    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:51.375897    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:51.387546    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:51.387559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:51.401703    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:51.401712    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:54.740754    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:54.740800    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:05:55.047805    4422 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:05:55.051119    4422 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:05:55.063004    4422 addons.go:510] duration metric: took 30.435949708s for enable addons: enabled=[storage-provisioner]
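	The 'default-storageclass' failure above is the same unreachable apiserver surfacing through an addon callback: listing StorageClasses fails at the TCP layer with "dial tcp 10.0.2.15:8443: i/o timeout", so only storage-provisioner is reported as enabled. A plain dial probe reproduces that symptom without any Kubernetes client; this is a sketch, not code from the test harness:

```go
// Minimal reachability check for the same endpoint.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 3*time.Second)
	if err != nil {
		// reproduces the addon error's failure mode:
		// "dial tcp 10.0.2.15:8443: i/o timeout"
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is reachable")
}
```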
	I0731 12:05:53.940587    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:59.741832    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:59.741863    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:58.943007    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:58.943491    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:58.982512    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:58.982675    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:59.003305    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:59.003449    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:59.020896    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:59.020965    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:59.033877    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:59.033949    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:59.044631    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:59.044710    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:59.054646    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:59.054722    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:59.064792    4588 logs.go:276] 0 containers: []
	W0731 12:05:59.064806    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:59.064872    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:59.075487    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
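	Each fallback cycle begins with the discovery pass shown above: one docker ps -a query per control-plane component, filtered on the k8s_<component> name prefix the kubelet gives its containers and formatted to print only the IDs. Two IDs for a component (as for kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and storage-provisioner here) indicates an exited container from an earlier start still present alongside the current one. A sketch of that discovery step, with assumed helper names; only the docker query itself mirrors the log:

```go
// Assumed helper names; the query mirrors the log lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists running and exited containers for one component,
// matching on the kubelet's k8s_<component> naming convention.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		// two IDs usually means a restarted pod left its old container behind
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}
```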
	I0731 12:05:59.075504    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:59.075509    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:59.092964    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:59.092975    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:59.131543    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:59.131553    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:59.146654    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:59.146665    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:59.158764    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:59.158776    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:59.174689    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:59.174702    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:59.199756    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:59.199763    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:59.211082    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:59.211093    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:59.245585    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:59.245596    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:59.257440    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:59.257451    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:59.268976    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:59.268986    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:59.281909    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:59.281921    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:59.286170    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:59.286179    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:59.299623    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:59.299632    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:59.314550    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:59.314559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:59.326703    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:59.326716    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:59.340603    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:59.340613    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:04.743194    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:04.743219    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:01.880848    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:09.744955    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:09.745021    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:06.883125    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:06.883274    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:06.897599    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:06.897679    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:06.909260    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:06.909337    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:06.919410    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:06.919470    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:06.930112    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:06.930176    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:06.941373    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:06.941440    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:06.952058    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:06.952119    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:06.961731    4588 logs.go:276] 0 containers: []
	W0731 12:06:06.961743    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:06.961807    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:06.972093    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:06.972111    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:06.972116    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:06.976791    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:06.976799    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:07.017561    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:07.017576    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:07.035732    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:07.035745    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:07.048127    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:07.048142    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:07.066933    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:07.066949    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:07.092530    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:07.092540    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:07.104518    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:07.104528    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:07.143243    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:07.143254    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:07.179924    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:07.179939    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:07.195018    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:07.195032    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:07.208163    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:07.208173    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:07.219650    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:07.219659    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:07.233462    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:07.233472    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:07.247302    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:07.247319    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:07.266696    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:07.266708    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:07.280212    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:07.280223    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:09.793302    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:14.747370    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:14.747414    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:14.795465    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:14.795586    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:14.808729    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:14.808802    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:14.819040    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:14.819105    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:14.829767    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:14.829834    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:14.840680    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:14.840747    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:14.851072    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:14.851143    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:14.861009    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:14.861073    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:14.871667    4588 logs.go:276] 0 containers: []
	W0731 12:06:14.871678    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:14.871736    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:14.882266    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:14.882282    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:14.882288    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:14.886478    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:14.886485    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:14.910802    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:14.910812    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:14.924505    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:14.924518    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:14.936110    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:14.936123    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:14.961204    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:14.961218    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:14.974802    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:14.974816    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:14.988505    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:14.988519    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:14.999970    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:14.999982    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
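	The "container status" step above relies on a shell-level fallback: `which crictl || echo crictl` expands to the crictl path when the CRI CLI is installed, and to the bare word crictl otherwise, whose failure under sudo then triggers the `|| sudo docker ps -a` branch. Run through bash -c, the whole line therefore prefers crictl and degrades to docker. A sketch of that invocation as an illustrative wrapper, not harness code:

```go
// Illustrative wrapper around the exact fallback command from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// prefers crictl when present, otherwise falls back to docker
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}
```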
	I0731 12:06:15.012002    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:15.012016    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:15.049060    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:15.049072    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:15.083558    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:15.083571    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:15.120413    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:15.120423    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:15.135780    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:15.135792    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:15.147731    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:15.147741    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:15.163717    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:15.163728    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:15.175812    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:15.175823    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:19.749630    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:19.749687    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:17.689210    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:24.751966    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:24.752174    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:24.768429    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:24.768517    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:24.780932    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:24.781013    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:24.791899    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:24.791972    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:24.802424    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:24.802494    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:24.833170    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:24.833247    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:24.844683    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:24.844753    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:24.854949    4422 logs.go:276] 0 containers: []
	W0731 12:06:24.854965    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:24.855025    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:24.865370    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:24.865387    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:24.865393    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:24.877388    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:24.877399    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:24.888909    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:24.888919    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:24.902273    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:24.902284    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:24.938470    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:24.938481    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:24.976139    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:24.976153    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:24.993833    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:24.993843    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:25.005408    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:25.005418    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:25.020129    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:25.020142    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:25.024615    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:25.024624    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:25.041743    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:25.041755    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:25.053524    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:25.053535    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:25.070813    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:25.070823    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:22.691452    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:22.691613    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:22.702971    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:22.703046    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:22.713624    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:22.713697    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:22.723992    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:22.724061    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:22.734500    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:22.734571    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:22.745542    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:22.745611    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:22.756436    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:22.756514    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:22.770966    4588 logs.go:276] 0 containers: []
	W0731 12:06:22.770978    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:22.771037    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:22.783850    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:22.783870    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:22.783875    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:22.794925    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:22.794935    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:22.820179    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:22.820189    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:22.837792    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:22.837805    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:22.854874    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:22.854884    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:22.877558    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:22.877569    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:22.891506    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:22.891517    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:22.930030    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:22.930042    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:22.942461    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:22.942475    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:22.954162    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:22.954174    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:22.992438    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:22.992446    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:22.996783    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:22.996788    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:23.010595    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:23.010608    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:23.022148    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:23.022159    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:23.039230    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:23.039241    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:23.075138    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:23.075149    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:23.086195    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:23.086206    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:25.601178    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:27.597531    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:30.603480    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:30.603727    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:30.623959    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:30.624053    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:30.639198    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:30.639270    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:30.651419    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:30.651495    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:30.662777    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:30.662848    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:30.673308    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:30.673379    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:30.686410    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:30.686482    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:30.696349    4588 logs.go:276] 0 containers: []
	W0731 12:06:30.696361    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:30.696421    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:30.706954    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:30.706972    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:30.706978    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:30.744323    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:30.744333    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:30.756250    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:30.756261    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:30.767317    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:30.767330    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:30.804791    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:30.804801    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:30.818723    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:30.818734    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:30.831230    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:30.831241    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:30.836002    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:30.836007    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:30.870142    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:30.870155    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:30.885403    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:30.885416    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:30.898752    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:30.898764    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:30.910235    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:30.910251    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:30.933837    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:30.933844    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:30.946182    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:30.946194    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:30.961077    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:30.961087    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:30.972331    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:30.972345    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:30.989044    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:30.989055    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:32.599944    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:32.600070    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:32.610965    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:32.611044    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:32.622060    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:32.622137    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:32.633203    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:32.633274    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:32.643847    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:32.643912    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:32.654516    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:32.654584    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:32.666607    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:32.666679    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:32.680630    4422 logs.go:276] 0 containers: []
	W0731 12:06:32.680642    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:32.680704    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:32.691138    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:32.691156    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:32.691161    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:32.705386    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:32.705397    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:32.719409    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:32.719420    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:32.730986    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:32.730996    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:32.748047    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:32.748058    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:32.772847    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:32.772855    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:32.785514    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:32.785525    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:32.790615    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:32.790624    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:32.825241    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:32.825251    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:32.840533    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:32.840546    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:32.852210    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:32.852221    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:32.863821    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:32.863830    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:32.875464    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:32.875474    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:35.410253    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:33.508818    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:40.412982    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:40.413199    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:40.432008    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:40.432103    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:40.446805    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:40.446886    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:40.459438    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:40.459503    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:40.469948    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:40.470021    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:40.480765    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:40.480833    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:40.490918    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:40.490980    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:40.500627    4422 logs.go:276] 0 containers: []
	W0731 12:06:40.500638    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:40.500688    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:40.510842    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:40.510856    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:40.510862    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:40.546536    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:40.546546    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:40.551072    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:40.551078    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:40.585216    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:40.585232    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:40.599628    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:40.599639    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:40.614141    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:40.614152    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:40.625937    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:40.625947    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:40.637714    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:40.637724    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:40.653458    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:40.653468    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:40.665064    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:40.665075    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:40.676977    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:40.676987    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:40.693730    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:40.693739    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:40.705064    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:40.705074    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:38.511199    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:38.511357    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:38.527760    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:38.527845    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:38.540953    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:38.541029    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:38.551986    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:38.552047    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:38.562665    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:38.562738    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:38.572904    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:38.572974    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:38.583405    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:38.583472    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:38.593509    4588 logs.go:276] 0 containers: []
	W0731 12:06:38.593519    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:38.593573    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:38.606793    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:38.606810    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:38.606816    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:38.618865    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:38.618875    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:38.623721    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:38.623728    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:38.665276    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:38.665286    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:38.679872    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:38.679882    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:38.690685    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:38.690696    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:38.702146    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:38.702156    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:38.713501    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:38.713512    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:38.736876    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:38.736883    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:38.752289    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:38.752303    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:38.766294    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:38.766303    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:38.777391    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:38.777401    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:38.815656    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:38.815664    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:38.850786    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:38.850797    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:38.865636    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:38.865646    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:38.879665    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:38.879676    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:38.891560    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:38.891570    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:41.410890    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:43.230273    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:46.411887    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:46.412065    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:46.433700    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:46.433786    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:46.448551    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:46.448618    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:46.459452    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:46.459519    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:46.469778    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:46.469851    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:46.480553    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:46.480618    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:46.499380    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:46.499452    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:46.510315    4588 logs.go:276] 0 containers: []
	W0731 12:06:46.510327    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:46.510390    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:46.520518    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:46.520538    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:46.520544    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:46.532588    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:46.532600    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:46.550018    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:46.550030    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:46.561372    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:46.561386    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:46.585168    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:46.585179    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:46.596583    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:46.596594    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:48.232741    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:48.233221    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:48.274772    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:48.274904    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:48.295691    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:48.295786    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:48.310940    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:48.311014    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:48.323684    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:48.323760    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:48.334677    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:48.334747    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:48.346112    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:48.346188    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:48.357266    4422 logs.go:276] 0 containers: []
	W0731 12:06:48.357277    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:48.357336    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:48.367494    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:48.367511    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:48.367516    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:48.402570    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:48.402582    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:48.416561    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:48.416572    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:48.428298    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:48.428309    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:48.445313    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:48.445328    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:48.471277    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:48.471290    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:48.482532    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:48.482546    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:48.516116    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:48.516127    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:48.530208    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:48.530220    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:48.542078    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:48.542092    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:48.556736    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:48.556747    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:48.573841    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:48.573851    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:48.585142    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:48.585155    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:51.091637    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:46.635815    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:46.635825    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:46.649367    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:46.649381    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:46.661306    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:46.661323    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:46.672474    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:46.672485    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:46.687288    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:46.687300    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:46.698841    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:46.698851    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:46.737539    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:46.737549    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:46.752144    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:46.752154    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:46.789828    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:46.789840    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:46.793948    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:46.793956    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:46.808364    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:46.808375    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:49.324293    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:56.093889    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:56.094242    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:56.135680    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:06:56.135828    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:56.156830    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:06:56.156929    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:56.172036    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:06:56.172119    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:56.184585    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:06:56.184655    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:56.195422    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:06:56.195487    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:56.205802    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:06:56.205876    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:56.224877    4422 logs.go:276] 0 containers: []
	W0731 12:06:56.224889    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:56.224952    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:56.235392    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:06:56.235411    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:56.235416    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:56.270733    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:56.270745    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:56.275120    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:06:56.275128    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:06:56.289368    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:06:56.289379    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:06:56.301357    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:06:56.301371    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:06:56.313276    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:56.313289    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:56.338207    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:06:56.338217    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:56.349487    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:56.349501    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:56.384613    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:06:56.384626    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:06:54.326501    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:54.326693    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:54.342310    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:54.342402    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:54.358758    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:54.358836    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:54.373159    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:54.373230    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:54.383869    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:54.383946    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:54.394615    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:54.394686    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:54.404723    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:54.404800    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:54.415512    4588 logs.go:276] 0 containers: []
	W0731 12:06:54.415523    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:54.415583    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:54.426806    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:54.426853    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:54.426859    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:54.461605    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:54.461616    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:54.505418    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:54.505431    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:54.516804    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:54.516816    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:54.521097    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:54.521104    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:54.533426    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:54.533437    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:54.558336    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:54.558345    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:54.598256    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:54.598277    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:54.613224    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:54.613235    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:54.624970    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:54.624983    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:54.637168    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:54.637179    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:54.654389    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:54.654401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:54.668534    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:54.668549    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:54.680174    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:54.680186    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:54.695951    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:54.695961    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:54.715864    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:54.715879    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:54.729776    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:54.729786    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:56.398678    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:06:56.398689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:06:56.410380    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:06:56.410391    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:06:56.422170    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:06:56.422183    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:06:56.440005    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:06:56.440016    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:06:58.962691    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:57.248330    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:03.965522    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:03.965828    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:04.003994    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:04.004119    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:04.024403    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:04.024490    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:04.039586    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:04.039665    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:04.051475    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:04.051540    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:04.065335    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:04.065407    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:04.076529    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:04.076603    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:04.091516    4422 logs.go:276] 0 containers: []
	W0731 12:07:04.091530    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:04.091591    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:04.102240    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:04.102255    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:04.102261    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:04.115072    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:04.115082    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:04.149151    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:04.149162    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:04.164344    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:04.164357    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:04.180178    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:04.180190    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:04.191969    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:04.191980    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:04.206641    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:04.206653    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:04.224724    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:04.224735    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:04.229326    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:04.229334    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:04.266717    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:04.266729    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:04.281149    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:04.281162    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:04.293774    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:04.293791    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:04.305169    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:04.305179    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:02.249955    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:02.250192    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:02.270048    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:02.270151    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:02.284326    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:02.284419    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:02.298221    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:02.298291    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:02.309307    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:02.309373    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:02.320159    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:02.320225    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:02.330744    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:02.330841    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:02.347395    4588 logs.go:276] 0 containers: []
	W0731 12:07:02.347406    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:02.347463    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:02.357375    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:02.357392    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:02.357397    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:02.396115    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:02.396124    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:02.407451    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:02.407462    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:02.423351    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:02.423364    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:02.438793    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:02.438804    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:02.462181    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:02.462195    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:02.475975    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:02.475985    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:02.500117    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:02.500125    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:02.512905    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:02.512917    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:02.529918    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:02.529929    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:02.541584    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:02.541598    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:02.553839    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:02.553852    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:02.593930    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:02.593944    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:02.608115    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:02.608133    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:02.622544    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:02.622558    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:02.636225    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:02.636236    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:02.640994    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:02.641001    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:05.181506    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:06.832728    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:10.183697    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:10.183958    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:10.205112    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:10.205217    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:10.220609    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:10.220689    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:10.233129    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:10.233212    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:10.244437    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:10.244503    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:10.255930    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:10.256002    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:10.270551    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:10.270620    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:10.280907    4588 logs.go:276] 0 containers: []
	W0731 12:07:10.280921    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:10.280981    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:10.291471    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:10.291490    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:10.291497    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:10.296076    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:10.296082    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:10.318733    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:10.318745    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:10.330179    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:10.330190    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:10.370781    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:10.370794    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:10.390030    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:10.390041    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:10.401207    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:10.401220    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:10.418166    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:10.418175    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:10.431006    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:10.431017    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:10.469740    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:10.469757    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:10.504063    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:10.504073    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:10.525135    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:10.525147    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:10.541057    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:10.541070    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:10.558339    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:10.558350    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:10.571662    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:10.571673    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:10.582641    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:10.582652    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:10.605326    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:10.605334    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:11.835351    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:11.835707    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:11.873402    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:11.873542    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:11.895287    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:11.895375    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:11.910229    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:11.910292    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:11.922732    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:11.922792    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:11.934349    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:11.934421    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:11.945363    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:11.945423    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:11.956270    4422 logs.go:276] 0 containers: []
	W0731 12:07:11.956282    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:11.956346    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:11.968612    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:11.968626    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:11.968633    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:12.004406    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:12.004414    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:12.017023    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:12.017034    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:12.032960    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:12.032971    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:12.045233    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:12.045245    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:12.057595    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:12.057607    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:12.082491    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:12.082501    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:12.086734    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:12.086742    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:12.126235    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:12.126246    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:12.141035    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:12.141046    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:12.155058    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:12.155070    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:12.167407    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:12.167418    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:12.185722    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:12.185733    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:14.699944    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:13.119072    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:19.702166    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:19.702433    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:19.725749    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:19.725859    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:19.741296    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:19.741370    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:19.753979    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:19.754054    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:19.766653    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:19.766720    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:19.777458    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:19.777525    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:19.788037    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:19.788113    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:19.798215    4422 logs.go:276] 0 containers: []
	W0731 12:07:19.798228    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:19.798286    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:19.809015    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:19.809032    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:19.809039    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:19.823417    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:19.823428    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:19.839561    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:19.839572    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:19.864373    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:19.864382    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:19.878749    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:19.878762    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:19.890444    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:19.890455    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:19.924844    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:19.924855    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:19.939426    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:19.939440    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:19.954562    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:19.954573    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:19.972062    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:19.972075    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:19.985234    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:19.985245    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:19.997286    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:19.997298    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:20.033258    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:20.033267    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:18.121862    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:18.122204    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:18.154754    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:18.154885    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:18.175143    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:18.175242    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:18.189466    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:18.189548    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:18.201197    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:18.201274    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:18.212018    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:18.212084    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:18.222689    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:18.222758    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:18.233252    4588 logs.go:276] 0 containers: []
	W0731 12:07:18.233264    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:18.233324    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:18.244250    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:18.244267    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:18.244273    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:18.249218    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:18.249228    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:18.286364    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:18.286375    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:18.298185    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:18.298198    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:18.315729    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:18.315741    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:18.327664    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:18.327674    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:18.352974    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:18.352983    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:18.368860    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:18.368873    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:18.395383    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:18.395395    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:18.412247    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:18.412257    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:18.450749    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:18.450760    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:18.464966    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:18.464977    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:18.502913    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:18.502923    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:18.519265    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:18.519275    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:18.538309    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:18.538320    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:18.550030    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:18.550041    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:18.561461    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:18.561473    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:21.079611    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:22.539817    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:26.082308    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:26.082739    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:26.129863    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:26.130008    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:26.150723    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:26.150828    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:26.167233    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:26.167315    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:26.180093    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:26.180168    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:26.190588    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:26.190660    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:26.201805    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:26.201880    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:26.212499    4588 logs.go:276] 0 containers: []
	W0731 12:07:26.212509    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:26.212564    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:26.223015    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:26.223032    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:26.223039    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:26.247253    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:26.247262    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:26.260266    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:26.260277    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:26.275452    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:26.275463    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:26.287252    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:26.287264    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:26.299463    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:26.299472    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:26.314947    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:26.314958    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:26.327807    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:26.327822    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:26.365335    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:26.365356    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:26.378698    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:26.378709    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:26.382605    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:26.382611    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:26.400567    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:26.400581    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:26.414471    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:26.414481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:26.428541    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:26.428551    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:26.466610    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:26.466622    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:26.481007    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:26.481019    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:26.519589    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:26.519599    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:27.542377    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:27.542538    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:27.559196    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:27.559286    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:27.572934    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:27.573007    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:27.583930    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:27.583998    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:27.595042    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:27.595115    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:27.606115    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:27.606186    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:27.616800    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:27.616865    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:27.627184    4422 logs.go:276] 0 containers: []
	W0731 12:07:27.627196    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:27.627257    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:27.637621    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:27.637635    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:27.637641    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:27.673945    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:27.673961    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:27.688544    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:27.688556    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:27.699741    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:27.699755    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:27.711218    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:27.711233    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:27.722454    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:27.722468    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:27.726952    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:27.726958    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:27.762037    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:27.762050    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:27.776028    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:27.776039    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:27.790882    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:27.790893    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:27.802884    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:27.802895    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:27.819871    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:27.819881    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:27.831505    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:27.831516    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:30.356466    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:29.035934    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:35.357915    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:35.358277    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:35.394673    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:35.394801    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:35.413197    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:35.413295    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:35.427462    4422 logs.go:276] 2 containers: [6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:35.427540    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:35.439638    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:35.439711    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:35.452395    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:35.452467    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:35.467570    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:35.467632    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:35.478085    4422 logs.go:276] 0 containers: []
	W0731 12:07:35.478097    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:35.478155    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:35.489335    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:35.489351    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:35.489358    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:35.525012    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:35.525026    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:35.543054    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:35.543066    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:35.554640    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:35.554649    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:35.577747    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:35.577757    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:35.592762    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:35.592775    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:35.628188    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:35.628198    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:35.632704    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:35.632712    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:35.648181    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:35.648191    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:35.662549    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:35.662562    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:35.674918    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:35.674932    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:35.686435    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:35.686447    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:35.701328    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:35.701341    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:34.038650    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:34.038933    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:34.072543    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:34.072704    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:34.091849    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:34.091936    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:34.105382    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:34.105460    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:34.117802    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:34.117862    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:34.128433    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:34.128493    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:34.139371    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:34.139442    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:34.150608    4588 logs.go:276] 0 containers: []
	W0731 12:07:34.150621    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:34.150682    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:34.161930    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:34.161950    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:34.161956    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:34.168389    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:34.168397    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:34.186834    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:34.186846    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:34.224099    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:34.224110    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:34.237636    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:34.237652    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:34.277698    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:34.277711    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:34.291479    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:34.291490    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:34.305550    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:34.305559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:34.324678    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:34.324690    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:34.335737    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:34.335748    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:34.372446    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:34.372458    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:34.384349    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:34.384361    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:34.408951    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:34.408960    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:34.421540    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:34.421554    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:34.433508    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:34.433521    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:34.451775    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:34.451790    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:34.465579    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:34.465589    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:38.215047    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:36.978536    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:43.216762    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:43.216978    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:43.241617    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:43.241709    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:43.255277    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:43.255345    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:43.270322    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:43.270384    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:43.280533    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:43.280593    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:43.291011    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:43.291069    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:43.301238    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:43.301298    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:43.311603    4422 logs.go:276] 0 containers: []
	W0731 12:07:43.311612    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:43.311664    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:43.321887    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:43.321907    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:43.321912    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:43.333227    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:43.333241    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:43.345187    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:43.345198    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:43.357696    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:43.357707    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:43.362083    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:43.362090    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:43.397507    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:43.397518    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:43.419286    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:43.419298    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:43.443461    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:43.443475    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:43.454731    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:43.454743    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:43.466245    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:43.466255    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:43.483667    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:43.483677    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:43.495238    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:43.495249    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:43.513551    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:43.513561    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:43.528195    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:43.528205    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:43.562316    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:43.562326    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:46.082155    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:41.979453    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:41.979662    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:41.999904    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:42.000010    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:42.014967    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:42.015039    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:42.027094    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:42.027171    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:42.038293    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:42.038366    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:42.048849    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:42.048915    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:42.062588    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:42.062663    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:42.073929    4588 logs.go:276] 0 containers: []
	W0731 12:07:42.073940    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:42.073997    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:42.083938    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:42.083957    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:42.083962    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:42.123444    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:42.123453    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:42.144029    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:42.144040    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:42.159703    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:42.159714    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:42.176470    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:42.176481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:42.188082    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:42.188094    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:42.192745    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:42.192752    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:42.228422    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:42.228433    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:42.268458    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:42.268471    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:42.280236    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:42.280247    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:42.294358    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:42.294368    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:42.305492    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:42.305504    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:42.327923    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:42.327930    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:42.345979    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:42.345989    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:42.359347    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:42.359358    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:42.370852    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:42.370864    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:42.383091    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:42.383105    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:44.896815    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:51.084487    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:51.084688    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:51.103990    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:51.104074    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:51.117913    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:51.117979    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:51.129050    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:51.129116    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:51.140699    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:51.140768    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:51.151233    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:51.151296    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:51.162247    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:51.162316    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:51.172927    4422 logs.go:276] 0 containers: []
	W0731 12:07:51.172939    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:51.172998    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:51.183949    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:51.183967    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:51.183973    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:51.198956    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:51.198970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:51.211208    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:51.211218    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:51.215732    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:51.215741    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:51.230581    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:51.230592    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:51.242238    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:51.242249    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:51.254681    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:51.254692    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:51.290577    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:51.290587    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:51.304336    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:51.304347    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:51.316020    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:51.316031    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:51.327471    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:51.327482    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:51.339061    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:51.339071    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:51.372885    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:51.372894    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:51.384930    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:51.384942    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:49.898946    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:49.899111    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:49.914468    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:49.914550    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:49.926708    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:49.926779    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:49.937683    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:49.937746    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:49.948118    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:49.948186    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:49.958506    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:49.958565    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:49.968980    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:49.969046    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:49.982700    4588 logs.go:276] 0 containers: []
	W0731 12:07:49.982711    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:49.982768    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:49.993379    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:49.993396    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:49.993401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:50.010901    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:50.010913    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:50.022418    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:50.022428    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:50.056874    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:50.056885    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:50.071023    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:50.071034    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:50.082238    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:50.082248    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:50.106065    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:50.106077    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:50.117903    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:50.117913    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:50.129826    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:50.129836    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:50.141029    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:50.141039    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:50.156614    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:50.156625    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:50.175388    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:50.175399    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:50.188598    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:50.188609    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:50.227512    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:50.227523    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:50.243430    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:50.243440    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:50.256427    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:50.256438    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:50.295748    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:50.295758    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
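	[editor's note] Interleaved through these passes, both test processes (4422 and 4588) repeatedly probe https://10.0.2.15:8443/healthz, and every probe fails with "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" roughly five seconds after the "Checking apiserver healthz" line. A minimal Go sketch of such a probe follows; the 5 s timeout is an assumption inferred from that cadence, and InsecureSkipVerify is used here for brevity, whereas minikube's real client verifies against the cluster CA.

```go
// healthz.go — sketch of the repeated apiserver health probe in the
// report. Assumptions: 5s per-request timeout (inferred from the log
// spacing) and TLS verification disabled for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed; matches the observed retry cadence
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// On a down apiserver this surfaces as the log's
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s (%s)\n", url, resp.Status, body)
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println("stopped:", err)
			time.Sleep(2 * time.Second) // back off, then retry, as the runner does
			continue
		}
		return
	}
}
```

	Each failed probe is what triggers another full gathering pass, which is why the same container IDs are re-enumerated over and over in this section.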
	I0731 12:07:51.410822    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:51.410834    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:53.931472    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:52.802227    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:58.933217    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:58.933511    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:58.962369    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:07:58.962506    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:58.980872    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:07:58.980961    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:58.994648    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:07:58.994730    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:59.006102    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:07:59.006176    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:59.016592    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:07:59.016660    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:59.027228    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:07:59.027306    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:59.037658    4422 logs.go:276] 0 containers: []
	W0731 12:07:59.037672    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:59.037731    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:59.050225    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:07:59.050242    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:59.050247    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:59.085721    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:07:59.085732    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:07:59.099989    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:07:59.100001    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:07:59.117569    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:07:59.117579    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:07:59.132707    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:07:59.132718    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:07:59.144925    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:07:59.144935    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:07:59.156611    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:07:59.156621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:07:59.170986    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:07:59.171001    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:07:59.182670    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:07:59.182681    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:07:59.200385    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:59.200398    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:59.234053    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:59.234060    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:59.238324    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:07:59.238333    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:07:59.250042    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:07:59.250054    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:07:59.261459    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:59.261471    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:59.285611    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:07:59.285621    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:57.804368    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:57.804506    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:57.815744    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:57.815808    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:57.826481    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:57.826539    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:57.836988    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:57.837058    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:57.847512    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:57.847580    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:57.858394    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:57.858454    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:57.868323    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:57.868383    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:57.878490    4588 logs.go:276] 0 containers: []
	W0731 12:07:57.878501    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:57.878560    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:57.888531    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:57.888547    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:57.888552    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:57.900804    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:57.900815    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:57.917523    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:57.917534    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:57.929405    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:57.929417    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:57.968241    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:57.968250    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:58.005336    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:58.005350    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:58.019546    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:58.019558    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:58.056651    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:58.056661    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:58.071114    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:58.071127    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:58.082114    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:58.082125    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:58.094483    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:58.094494    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:58.099184    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:58.099190    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:58.111874    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:58.111885    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:58.127115    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:58.127126    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:58.144337    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:58.144348    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:58.155841    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:58.155851    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:58.180792    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:58.180803    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:00.695235    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:01.799274    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:05.697610    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:05.697876    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:05.723219    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:05.723316    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:05.739479    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:05.739564    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:05.752441    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:08:05.752533    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:05.763378    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:05.763449    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:05.781029    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:08:05.781106    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:05.792072    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:05.792136    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:05.804845    4588 logs.go:276] 0 containers: []
	W0731 12:08:05.804856    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:05.804917    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:05.815664    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:05.815682    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:05.815689    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:05.855428    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:05.855447    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:05.860755    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:05.860771    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:05.881279    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:05.881290    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:05.892723    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:05.892736    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:05.903794    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:05.903804    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:05.917906    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:05.917915    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:05.932480    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:05.932497    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:05.943737    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:05.943751    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:05.955922    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:05.955932    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:05.967631    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:05.967641    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:06.003538    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:06.003551    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:06.015126    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:06.015136    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:06.033575    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:06.033587    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:06.047167    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:06.047180    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:06.083726    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:06.083739    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:06.097886    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:06.097897    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
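	[editor's note] Besides per-container logs, every pass also collects host-level sources: kubelet and Docker/cri-docker units via journalctl, kernel warnings via dmesg, and node state via the version-pinned kubectl binary with the in-guest kubeconfig. The sketch below runs those exact commands (copied verbatim from the report) through `/bin/bash -c`; the only assumption is local execution, since minikube issues them over SSH inside the VM.

```go
// hostlogs.go — sketch of the host-side collectors in the report.
// Commands are verbatim from the log; running them locally instead
// of over SSH is the simplifying assumption here.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u docker -u cri-docker -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		`sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes ` +
			`--kubeconfig=/var/lib/minikube/kubeconfig`,
	}
	for _, c := range cmds {
		// Each collector is wrapped in bash -c, matching ssh_runner.go's Run lines.
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%q failed: %v\n", c, err)
		}
		fmt.Printf("=== %s ===\n%s\n", c, out)
	}
}
```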
	I0731 12:08:06.801573    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:06.801919    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:06.841148    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:06.841282    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:06.860823    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:06.860926    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:06.875626    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:06.875709    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:06.887510    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:06.887582    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:06.898436    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:06.898512    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:06.909117    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:06.909184    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:06.924361    4422 logs.go:276] 0 containers: []
	W0731 12:08:06.924374    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:06.924431    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:06.934751    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:06.934769    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:06.934774    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:06.975448    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:06.975462    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:06.990995    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:06.991007    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:07.004953    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:07.004964    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:07.017591    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:07.017602    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:07.029578    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:07.029589    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:07.064663    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:07.064669    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:07.079852    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:07.079864    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:07.097168    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:07.097179    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:07.109262    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:07.109271    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:07.113868    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:07.113877    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:07.125789    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:07.125799    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:07.138151    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:07.138165    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:07.153079    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:07.153089    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:07.171841    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:07.171854    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:09.701309    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:08.623699    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:14.703654    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:14.703898    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:14.722859    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:14.722931    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:14.739410    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:14.739486    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:14.750008    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:14.750074    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:14.760813    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:14.760891    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:14.772375    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:14.772449    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:14.783274    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:14.783339    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:14.793793    4422 logs.go:276] 0 containers: []
	W0731 12:08:14.793805    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:14.793865    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:14.804295    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:14.804314    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:14.804320    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:14.840001    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:14.840011    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:14.844904    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:14.844914    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:14.883120    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:14.883131    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:14.895077    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:14.895087    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:14.910715    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:14.910727    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:14.928247    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:14.928258    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:14.940426    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:14.940438    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:14.959051    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:14.959061    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:14.970862    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:14.970874    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:14.986430    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:14.986440    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:14.997755    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:14.997764    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:15.023112    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:15.023119    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:15.037687    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:15.037702    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:15.049179    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:15.049191    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
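	(Note the interleaving: two minikube processes, PID 4422 and PID 4588, write to this log concurrently, so timestamps jump backward at each switch between them.)

	Each block above follows the same diagnostic loop: the apiserver health probe at https://10.0.2.15:8443/healthz times out, so minikube enumerates the per-component containers and tails their logs before retrying. A condensed sketch of that pass, using only commands that appear verbatim in the log:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      docker ps -a --filter=name=k8s_${c} --format '{{.ID}}'   # list container IDs per component
	    done
	    # each ID found is then tailed:
	    #   docker logs --tail 400 <id>
	    # plus the host-level sources:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig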
	I0731 12:08:13.626068    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:13.626419    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:13.660610    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:13.660747    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:13.680251    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:13.680349    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:13.695429    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:08:13.695517    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:13.708400    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:13.708485    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:13.720569    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:08:13.720635    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:13.732052    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:13.732116    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:13.742412    4588 logs.go:276] 0 containers: []
	W0731 12:08:13.742424    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:13.742486    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:13.753070    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:13.753086    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:13.753093    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:13.767799    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:13.767815    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:13.779131    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:13.779143    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:13.791429    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:13.791440    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:13.806516    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:13.806525    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:13.818309    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:13.818325    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:13.857206    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:13.857218    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:13.862759    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:13.862766    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:13.897209    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:13.897220    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:13.935231    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:13.935245    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:13.950689    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:13.950700    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:13.961724    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:13.961734    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:13.983547    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:13.983555    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:14.000972    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:14.000982    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:14.012616    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:14.012628    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:14.027070    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:14.027082    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:14.041235    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:14.041245    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:16.554857    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:17.562705    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:21.556464    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:21.556893    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:21.595072    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:21.595213    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:22.563262    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:22.563404    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:22.574987    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:22.575064    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:22.586713    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:22.586793    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:22.597265    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:22.597341    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:22.608344    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:22.608409    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:22.622607    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:22.622669    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:22.633395    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:22.633468    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:22.643784    4422 logs.go:276] 0 containers: []
	W0731 12:08:22.643797    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:22.643855    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:22.653850    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:22.653866    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:22.653870    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:22.665677    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:22.665686    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:22.670075    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:22.670081    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:22.684303    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:22.684313    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:22.696608    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:22.696620    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:22.709983    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:22.709995    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:22.723113    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:22.723125    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:22.761307    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:22.761321    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:22.773168    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:22.773180    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:22.788550    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:22.788560    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:22.800921    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:22.800932    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:22.816058    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:22.816069    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:22.832943    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:22.832953    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:22.857980    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:22.857990    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:22.902396    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:22.902405    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:25.418480    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:21.616768    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:21.616887    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:21.632210    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:08:21.632295    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:21.644603    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:21.644680    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:21.656493    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:08:21.656565    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:21.667414    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:21.667487    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:21.677750    4588 logs.go:276] 0 containers: []
	W0731 12:08:21.677763    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:21.677822    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:21.688270    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:21.688290    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:21.688297    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:21.692488    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:21.692496    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:21.703824    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:21.703836    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:21.728385    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:21.728395    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:21.752124    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:21.752137    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:21.767965    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:21.767975    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:21.779792    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:21.779805    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:21.794256    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:21.794267    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:21.808424    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:21.808434    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:21.823453    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:21.823465    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:21.836913    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:21.836924    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:21.848330    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:21.848342    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:21.859809    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:21.859819    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:21.898412    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:21.898424    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:21.936070    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:21.936084    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:21.974342    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:21.974353    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:21.985777    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:21.985787    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:24.499193    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
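	Both processes poll the same endpoint on a fixed cadence and hit the same client-side timeout. A rough manual equivalent of the probe (hypothetical: minikube issues the request from Go with its client certificate, not via curl):

	    curl -k --max-time 5 https://10.0.2.15:8443/healthz

	A healthy apiserver answers 200 "ok"; here the connection never completes at all, which is consistent with the guest address 10.0.2.15 not being reachable from the host under QEMU user-mode networking rather than with an apiserver fault.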
	I0731 12:08:30.420754    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:30.420858    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:30.433572    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:30.433650    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:30.452571    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:30.452647    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:30.465182    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:30.465258    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:30.483737    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:30.483806    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:30.494491    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:30.494555    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:30.504960    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:30.505029    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:30.515342    4422 logs.go:276] 0 containers: []
	W0731 12:08:30.515354    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:30.515418    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:30.531151    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:30.531170    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:30.531175    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:30.555626    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:30.555633    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:30.569553    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:30.569563    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:30.585209    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:30.585219    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:30.596659    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:30.596671    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:30.610208    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:30.610222    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:30.646085    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:30.646099    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:30.650752    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:30.650759    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:30.662196    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:30.662207    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:30.674028    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:30.674040    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:30.686352    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:30.686365    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:30.697671    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:30.697687    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:30.715052    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:30.715062    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:30.750267    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:30.750276    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:30.761869    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:30.761880    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:29.501449    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:29.501881    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:29.541852    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:29.541996    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:29.565283    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:29.565395    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:29.580570    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:08:29.580653    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:29.593317    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:29.593392    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:29.604400    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:08:29.604467    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:29.620091    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:29.620162    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:29.636216    4588 logs.go:276] 0 containers: []
	W0731 12:08:29.636229    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:29.636288    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:29.647420    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:29.647437    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:29.647444    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:29.660974    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:29.660985    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:29.673135    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:29.673145    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:29.685367    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:29.685377    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:29.723197    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:29.723217    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:29.727525    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:29.727534    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:29.739470    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:29.739481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:29.755441    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:29.755453    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:29.773330    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:29.773341    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:29.784927    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:29.784937    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:29.806789    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:29.806802    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:29.845355    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:29.845369    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:29.856944    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:29.856955    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:29.868376    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:29.868390    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:29.911964    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:29.911978    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:29.926201    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:29.926210    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:29.940029    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:29.940040    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:33.278425    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:32.458981    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:38.280684    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:38.280820    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:38.291520    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:38.291592    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:38.303754    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:38.303821    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:38.315796    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:38.315867    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:38.326383    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:38.326445    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:38.337399    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:38.337464    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:38.348568    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:38.348628    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:38.358470    4422 logs.go:276] 0 containers: []
	W0731 12:08:38.358480    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:38.358545    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:38.369714    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:38.369731    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:38.369737    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:38.374372    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:38.374379    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:38.389077    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:38.389089    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:38.400705    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:38.400715    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:38.435418    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:38.435430    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:38.450318    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:38.450330    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:38.464841    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:38.464854    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:38.477103    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:38.477115    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:38.494264    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:38.494276    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:38.505974    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:38.505986    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:38.517900    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:38.517911    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:38.532522    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:38.532535    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:38.568361    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:38.568375    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:38.582788    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:38.582798    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:38.606367    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:38.606375    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:41.120302    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:37.461296    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:37.461763    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:37.499271    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:37.499429    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:37.524082    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:37.524194    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:37.538783    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:08:37.538872    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:37.550619    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:37.550685    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:37.561042    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:08:37.561108    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:37.571221    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:37.571286    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:37.581629    4588 logs.go:276] 0 containers: []
	W0731 12:08:37.581641    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:37.581701    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:37.592579    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:37.592597    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:37.592603    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:37.609903    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:37.609913    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:37.632728    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:37.632736    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:37.668778    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:37.668789    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:37.680708    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:37.680719    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:37.696402    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:37.696412    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:37.708226    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:37.708237    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:37.720314    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:37.720329    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:37.758882    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:37.758896    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:37.776241    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:37.776253    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:37.789985    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:37.789996    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:37.801881    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:37.801892    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:37.816063    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:37.816077    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:37.830998    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:37.831010    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:37.869569    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:37.869582    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:37.874273    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:37.874284    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:37.887856    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:37.887874    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:40.404644    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:46.122636    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:46.122758    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:46.134232    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:46.134319    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:46.145511    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:46.145589    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:46.156777    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:46.156853    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:46.168000    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:46.168067    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:46.178678    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:46.178755    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:46.190168    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:46.190242    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:46.201656    4422 logs.go:276] 0 containers: []
	W0731 12:08:46.201668    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:46.201730    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:46.213287    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:46.213309    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:46.213316    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:46.226432    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:46.226444    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:46.262506    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:46.262526    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:46.267156    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:46.267165    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:46.301633    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:46.301647    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:46.314149    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:46.314161    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:46.338051    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:46.338061    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:46.352384    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:46.352399    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:46.366256    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:46.366267    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:46.384926    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:46.384940    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:45.407336    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:45.407409    4588 kubeadm.go:597] duration metric: took 4m4.175951416s to restartPrimaryControlPlane
	W0731 12:08:45.407485    4588 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:08:45.407517    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:08:46.483656    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076142042s)
	I0731 12:08:46.483709    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
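	After roughly 4m4s of failed probes (the duration metric above), minikube abandons restartPrimaryControlPlane and falls back to a full reset against the cri-dockerd socket, then re-runs 'kubeadm init' with the same config (the Start line below). Condensed, the fallback is:

	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=...   # full flag value appears on the Start line below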
	I0731 12:08:46.488535    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:08:46.491462    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:08:46.494933    4588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:08:46.494940    4588 kubeadm.go:157] found existing configuration files:
	
	I0731 12:08:46.494976    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf
	I0731 12:08:46.497430    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:08:46.497457    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:08:46.500768    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf
	I0731 12:08:46.503634    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:08:46.503657    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:08:46.506331    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf
	I0731 12:08:46.509327    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:08:46.509352    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:08:46.512127    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf
	I0731 12:08:46.514695    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:08:46.514723    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
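	The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and is otherwise removed so 'kubeadm init' can regenerate it. Here every file is already absent, so each grep exits with status 2 and the rm is a no-op. The loop, condensed (endpoint and file list taken from the log):

	    endpoint=https://control-plane.minikube.internal:50507
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep "$endpoint" /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	    done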
	I0731 12:08:46.517624    4588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:08:46.535067    4588 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:08:46.535097    4588 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:08:46.584073    4588 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:08:46.584135    4588 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:08:46.584188    4588 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 12:08:46.634798    4588 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:08:46.637733    4588 out.go:204]   - Generating certificates and keys ...
	I0731 12:08:46.637795    4588 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:08:46.637837    4588 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:08:46.637881    4588 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:08:46.637917    4588 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:08:46.637955    4588 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:08:46.637991    4588 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:08:46.638025    4588 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:08:46.638059    4588 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:08:46.638098    4588 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:08:46.638140    4588 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:08:46.638177    4588 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:08:46.638208    4588 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:08:46.729165    4588 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:08:46.790443    4588 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:08:46.912439    4588 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:08:47.014664    4588 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:08:47.040835    4588 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:08:47.041261    4588 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:08:47.041287    4588 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:08:47.128023    4588 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:08:46.397810    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:46.397821    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:46.413117    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:46.413131    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:46.428675    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:46.428689    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:46.441083    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:46.441095    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:46.461776    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:46.461788    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:48.979964    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:47.136181    4588 out.go:204]   - Booting up control plane ...
	I0731 12:08:47.136341    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:08:47.136396    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:08:47.136441    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:08:47.136596    4588 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:08:47.136703    4588 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:08:52.133448    4588 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004445 seconds
	I0731 12:08:52.133564    4588 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:08:52.139325    4588 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:08:52.649078    4588 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:08:52.649222    4588 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-532000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:08:53.155874    4588 kubeadm.go:310] [bootstrap-token] Using token: trl3uf.qsefqlsp7p6ue2xn
	I0731 12:08:53.159383    4588 out.go:204]   - Configuring RBAC rules ...
	I0731 12:08:53.159436    4588 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:08:53.159523    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:08:53.163260    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:08:53.164255    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:08:53.165041    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:08:53.165911    4588 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:08:53.169581    4588 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:08:53.349712    4588 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:08:53.560368    4588 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:08:53.560858    4588 kubeadm.go:310] 
	I0731 12:08:53.560893    4588 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:08:53.560901    4588 kubeadm.go:310] 
	I0731 12:08:53.560938    4588 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:08:53.560941    4588 kubeadm.go:310] 
	I0731 12:08:53.560954    4588 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:08:53.560987    4588 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:08:53.561016    4588 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:08:53.561019    4588 kubeadm.go:310] 
	I0731 12:08:53.561046    4588 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:08:53.561049    4588 kubeadm.go:310] 
	I0731 12:08:53.561072    4588 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:08:53.561075    4588 kubeadm.go:310] 
	I0731 12:08:53.561104    4588 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:08:53.561142    4588 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:08:53.561179    4588 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:08:53.561184    4588 kubeadm.go:310] 
	I0731 12:08:53.561229    4588 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:08:53.561264    4588 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:08:53.561267    4588 kubeadm.go:310] 
	I0731 12:08:53.561310    4588 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token trl3uf.qsefqlsp7p6ue2xn \
	I0731 12:08:53.561355    4588 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d \
	I0731 12:08:53.561374    4588 kubeadm.go:310] 	--control-plane 
	I0731 12:08:53.561376    4588 kubeadm.go:310] 
	I0731 12:08:53.561424    4588 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:08:53.561428    4588 kubeadm.go:310] 
	I0731 12:08:53.561469    4588 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token trl3uf.qsefqlsp7p6ue2xn \
	I0731 12:08:53.561521    4588 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d 
	I0731 12:08:53.561691    4588 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 12:08:53.561715    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:08:53.561724    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:08:53.565801    4588 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:08:53.572998    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:08:53.576367    4588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
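	The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. Its payload is not shown in the log; the sketch below is an illustrative bridge+portmap conflist of the shape the bridge plugin expects, with all field values assumed rather than taken from this run:

	    # illustrative only; the actual 1-k8s.conflist contents are not in the log
	    sudo tee /etc/cni/net.d/1-k8s.conflist > /dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF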
	I0731 12:08:53.581194    4588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:08:53.581247    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:08:53.581258    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-532000 minikube.k8s.io/updated_at=2024_07_31T12_08_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=stopped-upgrade-532000 minikube.k8s.io/primary=true
	I0731 12:08:53.585034    4588 ops.go:34] apiserver oom_adj: -16
	I0731 12:08:53.622700    4588 kubeadm.go:1113] duration metric: took 41.498125ms to wait for elevateKubeSystemPrivileges
	I0731 12:08:53.622714    4588 kubeadm.go:394] duration metric: took 4m12.404947s to StartCluster
	I0731 12:08:53.622724    4588 settings.go:142] acquiring lock: {Name:mk8345ab3fe8ab5ac7063435ec374691aa431221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:08:53.622811    4588 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:08:53.623258    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:08:53.623483    4588 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:08:53.623498    4588 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:08:53.623538    4588 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-532000"
	I0731 12:08:53.623549    4588 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-532000"
	W0731 12:08:53.623553    4588 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:08:53.623564    4588 host.go:66] Checking if "stopped-upgrade-532000" exists ...
	I0731 12:08:53.623570    4588 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-532000"
	I0731 12:08:53.623584    4588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-532000"
	I0731 12:08:53.623630    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:08:53.624746    4588 kapi.go:59] client config for stopped-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bd41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:08:53.624865    4588 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-532000"
	W0731 12:08:53.624870    4588 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:08:53.624880    4588 host.go:66] Checking if "stopped-upgrade-532000" exists ...
	I0731 12:08:53.628027    4588 out.go:177] * Verifying Kubernetes components...
	I0731 12:08:53.628333    4588 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:08:53.631157    4588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:08:53.631163    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:08:53.633951    4588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:08:53.980378    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:53.980537    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:53.993034    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:08:53.993101    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:54.004796    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:08:54.004862    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:54.022079    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:08:54.022152    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:54.035518    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:08:54.035587    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:54.047792    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:08:54.047864    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:54.059073    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:08:54.059143    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:54.069719    4422 logs.go:276] 0 containers: []
	W0731 12:08:54.069734    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:54.069794    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:54.081489    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:08:54.081506    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:08:54.081513    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:08:54.094240    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:08:54.094250    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:54.107449    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:54.107468    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:54.112494    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:08:54.112507    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:08:54.128844    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:08:54.128860    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:08:54.142897    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:08:54.142909    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:08:54.159218    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:08:54.159236    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:08:54.173220    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:08:54.173235    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:08:54.188968    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:54.188980    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:54.213976    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:08:54.213986    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:08:54.226131    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:54.226141    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:54.265042    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:08:54.265054    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:08:54.277797    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:08:54.277809    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:08:54.296141    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:08:54.296152    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:08:54.308019    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:54.308031    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:53.637788    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:08:53.641966    4588 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:08:53.641973    4588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:08:53.641979    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:08:53.738599    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:08:53.743775    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:08:53.743817    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:08:53.747505    4588 api_server.go:72] duration metric: took 124.009167ms to wait for apiserver process to appear ...
	I0731 12:08:53.747512    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:08:53.747518    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:53.790745    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:08:53.820616    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
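
	A hypothetical follow-up check (not part of the log) for the manifest applied above would mirror the same kubectl invocation style; in this run it would stall, since the apiserver at 10.0.2.15:8443 never reports healthy:

	    # Assumed verification step: look for the storage-provisioner pod created by
	    # the manifest above. Against this cluster the call would time out.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.24.1/kubectl -n kube-system get pod storage-provisioner
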
	I0731 12:08:56.843830    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:58.749567    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:58.749604    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:01.846015    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:01.846176    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:01.858607    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:01.858680    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:01.869867    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:01.869938    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:01.880677    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:01.880746    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:01.896646    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:01.896707    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:01.907159    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:01.907235    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:01.917457    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:01.917531    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:01.927929    4422 logs.go:276] 0 containers: []
	W0731 12:09:01.927940    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:01.927997    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:01.939038    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:01.939057    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:01.939063    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:01.944152    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:01.944167    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:01.958572    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:01.958583    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:01.970830    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:01.970841    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:01.983303    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:01.983316    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:02.006475    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:02.006482    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:02.040221    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:02.040241    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:02.075564    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:02.075582    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:02.088044    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:02.088055    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:02.103389    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:02.103398    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:02.117780    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:02.117790    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:02.129465    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:02.129476    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:02.141519    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:02.141530    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:02.178585    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:02.178597    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:02.195128    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:02.195142    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:04.709314    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:03.749844    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:03.749884    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:09.710422    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:09.710586    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:09.725732    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:09.725812    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:09.743555    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:09.743622    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:09.754574    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:09.754647    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:09.764722    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:09.764788    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:09.775185    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:09.775246    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:09.787850    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:09.787922    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:09.798015    4422 logs.go:276] 0 containers: []
	W0731 12:09:09.798029    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:09.798095    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:09.809062    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:09.809079    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:09.809084    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:09.820554    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:09.820563    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:09.825524    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:09.825531    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:09.840338    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:09.840349    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:09.855151    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:09.855160    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:09.867188    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:09.867199    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:09.885211    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:09.885223    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:09.901579    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:09.901590    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:09.936210    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:09.936224    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:09.948458    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:09.948467    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:09.960049    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:09.960059    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:09.983538    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:09.983546    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:10.000146    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:10.000158    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:10.011850    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:10.011863    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:10.045523    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:10.045532    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:08.750126    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:08.750165    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:12.561132    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:13.750959    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:13.751004    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:17.563393    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:17.563553    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:17.575233    4422 logs.go:276] 1 containers: [cab81d1e766d]
	I0731 12:09:17.575295    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:17.586035    4422 logs.go:276] 1 containers: [2cfcfafbf5c8]
	I0731 12:09:17.586114    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:17.596675    4422 logs.go:276] 4 containers: [73a843d71974 c54a5bb3e48e 6ded7784bfc0 4837faa4e3b1]
	I0731 12:09:17.596736    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:17.607851    4422 logs.go:276] 1 containers: [a3b0fb78e2f3]
	I0731 12:09:17.607921    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:17.618251    4422 logs.go:276] 1 containers: [3104c88c8194]
	I0731 12:09:17.618316    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:17.628825    4422 logs.go:276] 1 containers: [5c8f9fc28fb9]
	I0731 12:09:17.628884    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:17.639426    4422 logs.go:276] 0 containers: []
	W0731 12:09:17.639438    4422 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:17.639499    4422 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:17.652632    4422 logs.go:276] 1 containers: [7f016efc1d36]
	I0731 12:09:17.652649    4422 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:17.652654    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:17.657395    4422 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:17.657403    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:17.681893    4422 logs.go:123] Gathering logs for container status ...
	I0731 12:09:17.681905    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:17.696567    4422 logs.go:123] Gathering logs for coredns [73a843d71974] ...
	I0731 12:09:17.696578    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73a843d71974"
	I0731 12:09:17.709102    4422 logs.go:123] Gathering logs for coredns [c54a5bb3e48e] ...
	I0731 12:09:17.709114    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c54a5bb3e48e"
	I0731 12:09:17.720888    4422 logs.go:123] Gathering logs for kube-proxy [3104c88c8194] ...
	I0731 12:09:17.720900    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3104c88c8194"
	I0731 12:09:17.732946    4422 logs.go:123] Gathering logs for kube-controller-manager [5c8f9fc28fb9] ...
	I0731 12:09:17.732958    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c8f9fc28fb9"
	I0731 12:09:17.754100    4422 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:17.754113    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:09:17.787205    4422 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:17.787213    4422 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:17.821339    4422 logs.go:123] Gathering logs for kube-apiserver [cab81d1e766d] ...
	I0731 12:09:17.821351    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cab81d1e766d"
	I0731 12:09:17.836981    4422 logs.go:123] Gathering logs for coredns [6ded7784bfc0] ...
	I0731 12:09:17.836991    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ded7784bfc0"
	I0731 12:09:17.848609    4422 logs.go:123] Gathering logs for kube-scheduler [a3b0fb78e2f3] ...
	I0731 12:09:17.848621    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3b0fb78e2f3"
	I0731 12:09:17.863768    4422 logs.go:123] Gathering logs for etcd [2cfcfafbf5c8] ...
	I0731 12:09:17.863777    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cfcfafbf5c8"
	I0731 12:09:17.877959    4422 logs.go:123] Gathering logs for coredns [4837faa4e3b1] ...
	I0731 12:09:17.877970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4837faa4e3b1"
	I0731 12:09:17.890960    4422 logs.go:123] Gathering logs for storage-provisioner [7f016efc1d36] ...
	I0731 12:09:17.890970    4422 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f016efc1d36"
	I0731 12:09:20.406370    4422 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:18.751643    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:18.751667    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:23.752424    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:23.752463    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0731 12:09:24.144959    4588 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:09:24.148355    4588 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:09:25.408777    4422 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:25.416484    4422 out.go:177] 
	W0731 12:09:25.420434    4422 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:09:25.420469    4422 out.go:239] * 
	W0731 12:09:25.422321    4422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:09:25.432284    4422 out.go:177] 
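
	Every api_server.go:269 failure above is the same symptom: nothing answers on the healthz endpoint before the poller's deadline. A hypothetical manual probe equivalent to what the poller does, from inside the guest:

	    # Probe the endpoint minikube's health poller checks; -k skips TLS verification
	    # because the apiserver serves a self-signed certificate for the VM address.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
	    # A healthy apiserver answers "ok"; in this run the request times out instead.
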
	I0731 12:09:24.155159    4588 addons.go:510] duration metric: took 30.532148666s for enable addons: enabled=[storage-provisioner]
	I0731 12:09:28.753504    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:28.753540    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:33.755045    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:33.755070    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-07-31 19:00:23 UTC, ends at Wed 2024-07-31 19:09:41 UTC. --
	Jul 31 19:09:26 running-upgrade-334000 dockerd[3327]: time="2024-07-31T19:09:26.682706450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 31 19:09:26 running-upgrade-334000 dockerd[3327]: time="2024-07-31T19:09:26.682772990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 31 19:09:26 running-upgrade-334000 dockerd[3327]: time="2024-07-31T19:09:26.682813780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 31 19:09:26 running-upgrade-334000 dockerd[3327]: time="2024-07-31T19:09:26.682892153Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fef51aa9e3e0d1ada147caf548d3a93431e484bc10cfb5f28e28d3b39da0920b pid=19001 runtime=io.containerd.runc.v2
	Jul 31 19:09:27 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:27Z" level=error msg="ContainerStats resp: {0x400075f480 linux}"
	Jul 31 19:09:27 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x4000894880 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x40008949c0 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x4000826ac0 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x40008279c0 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x4000a5e000 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x4000a5e540 linux}"
	Jul 31 19:09:28 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:28Z" level=error msg="ContainerStats resp: {0x4000a5e800 linux}"
	Jul 31 19:09:32 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:09:37 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 31 19:09:38 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:38Z" level=error msg="ContainerStats resp: {0x4000420e80 linux}"
	Jul 31 19:09:38 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:38Z" level=error msg="ContainerStats resp: {0x40008bb000 linux}"
	Jul 31 19:09:39 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:39Z" level=error msg="ContainerStats resp: {0x4000988f40 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075e240 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075e880 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075edc0 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075f1c0 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075f380 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075f940 linux}"
	Jul 31 19:09:40 running-upgrade-334000 cri-dockerd[3172]: time="2024-07-31T19:09:40Z" level=error msg="ContainerStats resp: {0x400075fb00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fef51aa9e3e0d       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   fd35d49bea3fa
	5c25151faba62       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   b2cdbc86e6f71
	73a843d719740       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   fd35d49bea3fa
	c54a5bb3e48e4       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b2cdbc86e6f71
	3104c88c81944       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   6b1433bfc1664
	7f016efc1d367       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a93af74aa9e97
	a3b0fb78e2f3a       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8775cf4c61ef2
	5c8f9fc28fb98       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   8d8ba87796b76
	2cfcfafbf5c80       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   5a46bc55d0184
	cab81d1e766d7       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9ded6fdc321c3
	
	
	==> coredns [5c25151faba6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1896961338221013031.8210138229477785589. HINFO: read udp 10.244.0.3:45710->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1896961338221013031.8210138229477785589. HINFO: read udp 10.244.0.3:53471->10.0.2.3:53: i/o timeout
	
	
	==> coredns [73a843d71974] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:35379->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:40163->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:56429->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:44348->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:51350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:49949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:37042->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:53650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:47169->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8803931558411644719.2869783902074374858. HINFO: read udp 10.244.0.2:45708->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c54a5bb3e48e] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:44916->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:52445->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:34048->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:55149->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:44983->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:53773->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:52929->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:39732->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:42270->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3791888276448593918.2611324127179288686. HINFO: read udp 10.244.0.3:35369->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fef51aa9e3e0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2814861336862013370.9039124696044989699. HINFO: read udp 10.244.0.2:33208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2814861336862013370.9039124696044989699. HINFO: read udp 10.244.0.2:51476->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2814861336862013370.9039124696044989699. HINFO: read udp 10.244.0.2:60348->10.0.2.3:53: i/o timeout
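
	All four coredns instances above fail the same way: their HINFO probe queries to the upstream resolver at 10.0.2.3:53 hit i/o timeouts. 10.0.2.3 is QEMU's built-in user-mode (slirp) DNS forwarder, so a hypothetical reproduction from inside the guest:

	    # Repeat the upstream lookup coredns keeps failing on; 10.0.2.3 is the default
	    # DNS address on QEMU's user-mode network (guest IP 10.0.2.15, gateway 10.0.2.2).
	    dig +time=2 +tries=1 @10.0.2.3 example.com
	    # An i/o timeout here matches the coredns "read udp ... i/o timeout" errors above.
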
	
	
	==> describe nodes <==
	Name:               running-upgrade-334000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-334000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=running-upgrade-334000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T12_05_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:05:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-334000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:09:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:05:24 +0000   Wed, 31 Jul 2024 19:05:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:05:24 +0000   Wed, 31 Jul 2024 19:05:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:05:24 +0000   Wed, 31 Jul 2024 19:05:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:05:24 +0000   Wed, 31 Jul 2024 19:05:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-334000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 9c4068ba79f9409ba2be74547070bfb9
	  System UUID:                9c4068ba79f9409ba2be74547070bfb9
	  Boot ID:                    1fb6b87a-1ee5-4973-ba5e-ef6dc7423618
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-wgl69                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-zwn9n                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-334000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-334000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-334000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-7kq4n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-334000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-334000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-334000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-334000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-334000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s   node-controller  Node running-upgrade-334000 event: Registered Node running-upgrade-334000 in Controller
	
	
	==> dmesg <==
	[  +1.371664] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.087030] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.078367] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.140902] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.078610] systemd-fstab-generator[1049]: Ignoring "noauto" for root device
	[  +0.080222] systemd-fstab-generator[1060]: Ignoring "noauto" for root device
	[  +2.129883] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[ +10.144236] systemd-fstab-generator[1928]: Ignoring "noauto" for root device
	[  +2.792770] systemd-fstab-generator[2207]: Ignoring "noauto" for root device
	[  +0.150623] systemd-fstab-generator[2241]: Ignoring "noauto" for root device
	[  +0.078949] systemd-fstab-generator[2252]: Ignoring "noauto" for root device
	[  +0.106425] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[Jul31 19:01] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.215063] systemd-fstab-generator[3128]: Ignoring "noauto" for root device
	[  +0.091176] systemd-fstab-generator[3140]: Ignoring "noauto" for root device
	[  +0.077240] systemd-fstab-generator[3151]: Ignoring "noauto" for root device
	[  +0.104741] systemd-fstab-generator[3165]: Ignoring "noauto" for root device
	[  +2.591228] systemd-fstab-generator[3314]: Ignoring "noauto" for root device
	[  +3.077640] systemd-fstab-generator[3718]: Ignoring "noauto" for root device
	[  +1.304121] systemd-fstab-generator[4009]: Ignoring "noauto" for root device
	[ +17.594468] kauditd_printk_skb: 68 callbacks suppressed
	[Jul31 19:05] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.273250] systemd-fstab-generator[12056]: Ignoring "noauto" for root device
	[  +5.141911] systemd-fstab-generator[12650]: Ignoring "noauto" for root device
	[  +0.454894] systemd-fstab-generator[12779]: Ignoring "noauto" for root device
	
	
	==> etcd [2cfcfafbf5c8] <==
	{"level":"info","ts":"2024-07-31T19:05:20.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-31T19:05:20.299Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-31T19:05:20.300Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:05:20.300Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:05:20.300Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-31T19:05:20.300Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:05:20.300Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:05:20.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T19:05:20.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T19:05:20.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-31T19:05:20.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:05:20.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:05:20.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T19:05:20.797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-31T19:05:20.797Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-334000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:05:20.797Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:05:20.797Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:05:20.798Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:05:20.802Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:09:41 up 9 min,  0 users,  load average: 0.22, 0.29, 0.19
	Linux running-upgrade-334000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [cab81d1e766d] <==
	I0731 19:05:22.064083       1 controller.go:611] quota admission added evaluator for: namespaces
	I0731 19:05:22.084917       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:05:22.084939       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:05:22.093290       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:05:22.098575       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 19:05:22.098651       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 19:05:22.101474       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 19:05:22.824322       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 19:05:23.000795       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 19:05:23.006371       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 19:05:23.006418       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:05:23.148348       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:05:23.158416       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:05:23.269406       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0731 19:05:23.271280       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0731 19:05:23.271639       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 19:05:23.272929       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:05:24.157677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 19:05:24.416935       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 19:05:24.420139       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0731 19:05:24.424932       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 19:05:24.484834       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:05:38.382443       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0731 19:05:38.431663       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0731 19:05:39.217119       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [5c8f9fc28fb9] <==
	I0731 19:05:37.616771       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 19:05:37.622070       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 19:05:37.631923       1 shared_informer.go:262] Caches are synced for job
	I0731 19:05:37.631925       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 19:05:37.635217       1 shared_informer.go:262] Caches are synced for attach detach
	I0731 19:05:37.659046       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 19:05:37.671252       1 shared_informer.go:262] Caches are synced for deployment
	I0731 19:05:37.676580       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0731 19:05:37.680706       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 19:05:37.680711       1 shared_informer.go:262] Caches are synced for PVC protection
	I0731 19:05:37.680716       1 shared_informer.go:262] Caches are synced for ephemeral
	I0731 19:05:37.681554       1 shared_informer.go:262] Caches are synced for HPA
	I0731 19:05:37.681561       1 shared_informer.go:262] Caches are synced for GC
	I0731 19:05:37.730294       1 shared_informer.go:262] Caches are synced for disruption
	I0731 19:05:37.730301       1 disruption.go:371] Sending events to api server.
	I0731 19:05:37.734638       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:05:37.736424       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:05:37.777750       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 19:05:38.150785       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:05:38.230934       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:05:38.230995       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 19:05:38.383564       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0731 19:05:38.434302       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7kq4n"
	I0731 19:05:38.533070       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zwn9n"
	I0731 19:05:38.539021       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wgl69"
	
	
	==> kube-proxy [3104c88c8194] <==
	I0731 19:05:39.205880       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0731 19:05:39.205904       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0731 19:05:39.205913       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 19:05:39.214992       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 19:05:39.215002       1 server_others.go:206] "Using iptables Proxier"
	I0731 19:05:39.215014       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 19:05:39.215101       1 server.go:661] "Version info" version="v1.24.1"
	I0731 19:05:39.215107       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:05:39.215626       1 config.go:444] "Starting node config controller"
	I0731 19:05:39.215631       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 19:05:39.216619       1 config.go:317] "Starting service config controller"
	I0731 19:05:39.216645       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 19:05:39.216654       1 config.go:226] "Starting endpoint slice config controller"
	I0731 19:05:39.216655       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 19:05:39.316383       1 shared_informer.go:262] Caches are synced for node config
	I0731 19:05:39.317613       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 19:05:39.317627       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [a3b0fb78e2f3] <==
	W0731 19:05:22.064666       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 19:05:22.064689       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 19:05:22.064730       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:05:22.064753       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:05:22.064777       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:05:22.064795       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:05:22.064829       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:05:22.064851       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 19:05:22.064877       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:05:22.064894       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:05:22.064929       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:05:22.064950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:05:22.064976       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:05:22.064997       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:05:22.964164       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 19:05:22.964237       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 19:05:23.022472       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 19:05:23.022506       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 19:05:23.066111       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:05:23.066133       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 19:05:23.069760       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:05:23.069819       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:05:23.089733       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:05:23.089821       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0731 19:05:25.260004       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-07-31 19:00:23 UTC, ends at Wed 2024-07-31 19:09:41 UTC. --
	Jul 31 19:05:26 running-upgrade-334000 kubelet[12656]: E0731 19:05:26.650344   12656 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-334000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-334000"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: I0731 19:05:37.584510   12656 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: I0731 19:05:37.585023   12656 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: I0731 19:05:37.622608   12656 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: I0731 19:05:37.684802   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84dsb\" (UniqueName: \"kubernetes.io/projected/f24649d6-4712-4c9a-abd5-12e0360f9286-kube-api-access-84dsb\") pod \"storage-provisioner\" (UID: \"f24649d6-4712-4c9a-abd5-12e0360f9286\") " pod="kube-system/storage-provisioner"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: I0731 19:05:37.684826   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f24649d6-4712-4c9a-abd5-12e0360f9286-tmp\") pod \"storage-provisioner\" (UID: \"f24649d6-4712-4c9a-abd5-12e0360f9286\") " pod="kube-system/storage-provisioner"
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: E0731 19:05:37.787984   12656 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: E0731 19:05:37.788006   12656 projected.go:192] Error preparing data for projected volume kube-api-access-84dsb for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 31 19:05:37 running-upgrade-334000 kubelet[12656]: E0731 19:05:37.788042   12656 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/f24649d6-4712-4c9a-abd5-12e0360f9286-kube-api-access-84dsb podName:f24649d6-4712-4c9a-abd5-12e0360f9286 nodeName:}" failed. No retries permitted until 2024-07-31 19:05:38.288027217 +0000 UTC m=+13.882596523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-84dsb" (UniqueName: "kubernetes.io/projected/f24649d6-4712-4c9a-abd5-12e0360f9286-kube-api-access-84dsb") pod "storage-provisioner" (UID: "f24649d6-4712-4c9a-abd5-12e0360f9286") : configmap "kube-root-ca.crt" not found
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.437179   12656 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.534755   12656 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.542744   12656 topology_manager.go:200] "Topology Admit Handler"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.579545   12656 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a93af74aa9e97acb51cc0cd11f4ae378616fd8a6652b38bb2ea28d74bff36f81"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.590482   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43378ab4-017b-42a0-aeed-0a86a24f314d-kube-proxy\") pod \"kube-proxy-7kq4n\" (UID: \"43378ab4-017b-42a0-aeed-0a86a24f314d\") " pod="kube-system/kube-proxy-7kq4n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.590686   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43378ab4-017b-42a0-aeed-0a86a24f314d-xtables-lock\") pod \"kube-proxy-7kq4n\" (UID: \"43378ab4-017b-42a0-aeed-0a86a24f314d\") " pod="kube-system/kube-proxy-7kq4n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.590727   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43378ab4-017b-42a0-aeed-0a86a24f314d-lib-modules\") pod \"kube-proxy-7kq4n\" (UID: \"43378ab4-017b-42a0-aeed-0a86a24f314d\") " pod="kube-system/kube-proxy-7kq4n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.590753   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgvpl\" (UniqueName: \"kubernetes.io/projected/43378ab4-017b-42a0-aeed-0a86a24f314d-kube-api-access-dgvpl\") pod \"kube-proxy-7kq4n\" (UID: \"43378ab4-017b-42a0-aeed-0a86a24f314d\") " pod="kube-system/kube-proxy-7kq4n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.691701   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1e975a3-1f34-4554-9ec0-768f7be4ad3b-config-volume\") pod \"coredns-6d4b75cb6d-zwn9n\" (UID: \"b1e975a3-1f34-4554-9ec0-768f7be4ad3b\") " pod="kube-system/coredns-6d4b75cb6d-zwn9n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.691733   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a88e815c-7a0e-40a2-9650-f7b02ecbb4d2-config-volume\") pod \"coredns-6d4b75cb6d-wgl69\" (UID: \"a88e815c-7a0e-40a2-9650-f7b02ecbb4d2\") " pod="kube-system/coredns-6d4b75cb6d-wgl69"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.691760   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrg4\" (UniqueName: \"kubernetes.io/projected/b1e975a3-1f34-4554-9ec0-768f7be4ad3b-kube-api-access-dlrg4\") pod \"coredns-6d4b75cb6d-zwn9n\" (UID: \"b1e975a3-1f34-4554-9ec0-768f7be4ad3b\") " pod="kube-system/coredns-6d4b75cb6d-zwn9n"
	Jul 31 19:05:38 running-upgrade-334000 kubelet[12656]: I0731 19:05:38.691771   12656 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwpxk\" (UniqueName: \"kubernetes.io/projected/a88e815c-7a0e-40a2-9650-f7b02ecbb4d2-kube-api-access-dwpxk\") pod \"coredns-6d4b75cb6d-wgl69\" (UID: \"a88e815c-7a0e-40a2-9650-f7b02ecbb4d2\") " pod="kube-system/coredns-6d4b75cb6d-wgl69"
	Jul 31 19:05:39 running-upgrade-334000 kubelet[12656]: I0731 19:05:39.620508   12656 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="fd35d49bea3faef0acbd221459dcd149666ce5453879ce3e52a99ab6947c9f6e"
	Jul 31 19:05:39 running-upgrade-334000 kubelet[12656]: I0731 19:05:39.656662   12656 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b2cdbc86e6f71a55929a2e669d3574336b67f3e78ac520bf96375476e6091f91"
	Jul 31 19:09:27 running-upgrade-334000 kubelet[12656]: I0731 19:09:27.313373   12656 scope.go:110] "RemoveContainer" containerID="4837faa4e3b1cc68243c5a1ade7fb23dde2efbcc9b21bd3d56f08a212615f1f3"
	Jul 31 19:09:27 running-upgrade-334000 kubelet[12656]: I0731 19:09:27.326668   12656 scope.go:110] "RemoveContainer" containerID="6ded7784bfc080c170dae4088785e3a05036d6b65ac9ed176926e0fb29ed335e"
	
	
	==> storage-provisioner [7f016efc1d36] <==
	I0731 19:05:38.737483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 19:05:38.742172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 19:05:38.742194       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 19:05:38.745341       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 19:05:38.745422       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-334000_900b541b-ea3b-43d9-8a4b-6887756e8800!
	I0731 19:05:38.745966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bbf1fa31-6cc8-4aee-9461-58ebf1589f7c", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-334000_900b541b-ea3b-43d9-8a4b-6887756e8800 became leader
	I0731 19:05:38.846340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-334000_900b541b-ea3b-43d9-8a4b-6887756e8800!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-334000 -n running-upgrade-334000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-334000 -n running-upgrade-334000: exit status 2 (15.642770083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-334000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-334000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-334000
--- FAIL: TestRunningBinaryUpgrade (602.00s)

TestKubernetesUpgrade (18.48s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.769526542s)

-- stdout --
	* [kubernetes-upgrade-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-760000" primary control-plane node in "kubernetes-upgrade-760000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-760000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:02:58.557052    4485 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:02:58.557196    4485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:02:58.557199    4485 out.go:304] Setting ErrFile to fd 2...
	I0731 12:02:58.557202    4485 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:02:58.557315    4485 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:02:58.558330    4485 out.go:298] Setting JSON to false
	I0731 12:02:58.574672    4485 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3747,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:02:58.574740    4485 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:02:58.580968    4485 out.go:177] * [kubernetes-upgrade-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:02:58.588687    4485 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:02:58.588739    4485 notify.go:220] Checking for updates...
	I0731 12:02:58.596865    4485 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:02:58.599809    4485 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:02:58.602854    4485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:02:58.605837    4485 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:02:58.608791    4485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:02:58.612238    4485 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:02:58.612305    4485 config.go:182] Loaded profile config "running-upgrade-334000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:02:58.612353    4485 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:02:58.615863    4485 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:02:58.622819    4485 start.go:297] selected driver: qemu2
	I0731 12:02:58.622826    4485 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:02:58.622834    4485 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:02:58.625028    4485 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:02:58.627803    4485 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:02:58.629126    4485 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 12:02:58.629139    4485 cni.go:84] Creating CNI manager for ""
	I0731 12:02:58.629146    4485 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:02:58.629174    4485 start.go:340] cluster config:
	{Name:kubernetes-upgrade-760000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:02:58.632842    4485 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:02:58.640831    4485 out.go:177] * Starting "kubernetes-upgrade-760000" primary control-plane node in "kubernetes-upgrade-760000" cluster
	I0731 12:02:58.644826    4485 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:02:58.644842    4485 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:02:58.644856    4485 cache.go:56] Caching tarball of preloaded images
	I0731 12:02:58.644917    4485 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:02:58.644923    4485 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:02:58.644982    4485 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kubernetes-upgrade-760000/config.json ...
	I0731 12:02:58.644993    4485 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kubernetes-upgrade-760000/config.json: {Name:mk3c435962a120f10139416ed330d930c1b9db63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:02:58.645331    4485 start.go:360] acquireMachinesLock for kubernetes-upgrade-760000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:02:58.645364    4485 start.go:364] duration metric: took 27.167µs to acquireMachinesLock for "kubernetes-upgrade-760000"
	I0731 12:02:58.645375    4485 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:02:58.645398    4485 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:02:58.653838    4485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:02:58.671485    4485 start.go:159] libmachine.API.Create for "kubernetes-upgrade-760000" (driver="qemu2")
	I0731 12:02:58.671509    4485 client.go:168] LocalClient.Create starting
	I0731 12:02:58.671576    4485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:02:58.671608    4485 main.go:141] libmachine: Decoding PEM data...
	I0731 12:02:58.671619    4485 main.go:141] libmachine: Parsing certificate...
	I0731 12:02:58.671659    4485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:02:58.671681    4485 main.go:141] libmachine: Decoding PEM data...
	I0731 12:02:58.671688    4485 main.go:141] libmachine: Parsing certificate...
	I0731 12:02:58.672107    4485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:02:58.826226    4485 main.go:141] libmachine: Creating SSH key...
	I0731 12:02:58.873540    4485 main.go:141] libmachine: Creating Disk image...
	I0731 12:02:58.873545    4485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:02:58.873785    4485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:02:58.883168    4485 main.go:141] libmachine: STDOUT: 
	I0731 12:02:58.883188    4485 main.go:141] libmachine: STDERR: 
	I0731 12:02:58.883239    4485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2 +20000M
	I0731 12:02:58.891322    4485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:02:58.891335    4485 main.go:141] libmachine: STDERR: 
	I0731 12:02:58.891351    4485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:02:58.891359    4485 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:02:58.891385    4485 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:02:58.891417    4485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:2a:51:23:c5:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:02:58.893146    4485 main.go:141] libmachine: STDOUT: 
	I0731 12:02:58.893160    4485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:02:58.893177    4485 client.go:171] duration metric: took 221.55875ms to LocalClient.Create
	I0731 12:03:00.896245    4485 start.go:128] duration metric: took 2.249829458s to createHost
	I0731 12:03:00.896307    4485 start.go:83] releasing machines lock for "kubernetes-upgrade-760000", held for 2.24994075s
	W0731 12:03:00.896432    4485 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:03:00.909348    4485 out.go:177] * Deleting "kubernetes-upgrade-760000" in qemu2 ...
	W0731 12:03:00.932692    4485 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:03:00.932717    4485 start.go:729] Will try again in 5 seconds ...
	I0731 12:03:05.936693    4485 start.go:360] acquireMachinesLock for kubernetes-upgrade-760000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:03:05.937195    4485 start.go:364] duration metric: took 394.708µs to acquireMachinesLock for "kubernetes-upgrade-760000"
	I0731 12:03:05.937267    4485 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:03:05.937536    4485 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:03:05.948243    4485 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:03:06.000700    4485 start.go:159] libmachine.API.Create for "kubernetes-upgrade-760000" (driver="qemu2")
	I0731 12:03:06.000760    4485 client.go:168] LocalClient.Create starting
	I0731 12:03:06.000906    4485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:03:06.000979    4485 main.go:141] libmachine: Decoding PEM data...
	I0731 12:03:06.001001    4485 main.go:141] libmachine: Parsing certificate...
	I0731 12:03:06.001068    4485 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:03:06.001120    4485 main.go:141] libmachine: Decoding PEM data...
	I0731 12:03:06.001146    4485 main.go:141] libmachine: Parsing certificate...
	I0731 12:03:06.001668    4485 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:03:06.163224    4485 main.go:141] libmachine: Creating SSH key...
	I0731 12:03:06.246945    4485 main.go:141] libmachine: Creating Disk image...
	I0731 12:03:06.246951    4485 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:03:06.247167    4485 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:03:06.256363    4485 main.go:141] libmachine: STDOUT: 
	I0731 12:03:06.256380    4485 main.go:141] libmachine: STDERR: 
	I0731 12:03:06.256443    4485 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2 +20000M
	I0731 12:03:06.264389    4485 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:03:06.264405    4485 main.go:141] libmachine: STDERR: 
	I0731 12:03:06.264418    4485 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:03:06.264423    4485 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:03:06.264440    4485 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:03:06.264472    4485 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ab:35:62:92:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:03:06.266013    4485 main.go:141] libmachine: STDOUT: 
	I0731 12:03:06.266025    4485 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:03:06.266043    4485 client.go:171] duration metric: took 265.197791ms to LocalClient.Create
	I0731 12:03:08.267534    4485 start.go:128] duration metric: took 2.329300958s to createHost
	I0731 12:03:08.267565    4485 start.go:83] releasing machines lock for "kubernetes-upgrade-760000", held for 2.32971825s
	W0731 12:03:08.267705    4485 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-760000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:03:08.278018    4485 out.go:177] 
	W0731 12:03:08.281052    4485 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:03:08.281061    4485 out.go:239] * 
	* 
	W0731 12:03:08.281860    4485 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:03:08.293988    4485 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
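
Both create attempts in the stderr above die at the same host-side step: the QEMU launch is piped through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go probe of that socket (a diagnostic sketch only, not part of the test suite; the socket path and the timeout are assumptions taken from the log above) reproduces the failing dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket that socket_vmnet_client connects to.
		// A "connection refused" here means the socket_vmnet daemon is not
		// running (or not listening at this path), matching the errors above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet daemon on the CI host (however it is managed there) is the first thing to try before rerunning the test; the same "Connection refused" signature recurs across the qemu2-driver failures in this report.
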
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-760000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-760000: (3.299798042s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-760000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-760000 status --format={{.Host}}: exit status 7 (62.546375ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.18703175s)

-- stdout --
	* [kubernetes-upgrade-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-760000" primary control-plane node in "kubernetes-upgrade-760000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-760000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:03:11.698523    4535 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:03:11.698682    4535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:03:11.698686    4535 out.go:304] Setting ErrFile to fd 2...
	I0731 12:03:11.698688    4535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:03:11.698818    4535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:03:11.699870    4535 out.go:298] Setting JSON to false
	I0731 12:03:11.715984    4535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3760,"bootTime":1722448831,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:03:11.716059    4535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:03:11.721136    4535 out.go:177] * [kubernetes-upgrade-760000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:03:11.727930    4535 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:03:11.727983    4535 notify.go:220] Checking for updates...
	I0731 12:03:11.736044    4535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:03:11.739037    4535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:03:11.742999    4535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:03:11.746026    4535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:03:11.748975    4535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:03:11.752254    4535 config.go:182] Loaded profile config "kubernetes-upgrade-760000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:03:11.752545    4535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:03:11.756010    4535 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:03:11.763053    4535 start.go:297] selected driver: qemu2
	I0731 12:03:11.763059    4535 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:03:11.763171    4535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:03:11.765371    4535 cni.go:84] Creating CNI manager for ""
	I0731 12:03:11.765389    4535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:03:11.765405    4535 start.go:340] cluster config:
	{Name:kubernetes-upgrade-760000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-760000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:03:11.768662    4535 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:03:11.777011    4535 out.go:177] * Starting "kubernetes-upgrade-760000" primary control-plane node in "kubernetes-upgrade-760000" cluster
	I0731 12:03:11.781047    4535 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:03:11.781067    4535 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:03:11.781081    4535 cache.go:56] Caching tarball of preloaded images
	I0731 12:03:11.781158    4535 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:03:11.781165    4535 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:03:11.781256    4535 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kubernetes-upgrade-760000/config.json ...
	I0731 12:03:11.781748    4535 start.go:360] acquireMachinesLock for kubernetes-upgrade-760000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:03:11.781774    4535 start.go:364] duration metric: took 21.041µs to acquireMachinesLock for "kubernetes-upgrade-760000"
	I0731 12:03:11.781782    4535 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:03:11.781788    4535 fix.go:54] fixHost starting: 
	I0731 12:03:11.781898    4535 fix.go:112] recreateIfNeeded on kubernetes-upgrade-760000: state=Stopped err=<nil>
	W0731 12:03:11.781905    4535 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:03:11.790069    4535 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-760000" ...
	I0731 12:03:11.793890    4535 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:03:11.793921    4535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ab:35:62:92:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:03:11.795716    4535 main.go:141] libmachine: STDOUT: 
	I0731 12:03:11.795733    4535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:03:11.795759    4535 fix.go:56] duration metric: took 13.9685ms for fixHost
	I0731 12:03:11.795763    4535 start.go:83] releasing machines lock for "kubernetes-upgrade-760000", held for 13.982417ms
	W0731 12:03:11.795769    4535 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:03:11.795801    4535 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:03:11.795806    4535 start.go:729] Will try again in 5 seconds ...
	I0731 12:03:16.798837    4535 start.go:360] acquireMachinesLock for kubernetes-upgrade-760000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:03:16.799290    4535 start.go:364] duration metric: took 337.833µs to acquireMachinesLock for "kubernetes-upgrade-760000"
	I0731 12:03:16.799433    4535 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:03:16.799452    4535 fix.go:54] fixHost starting: 
	I0731 12:03:16.800107    4535 fix.go:112] recreateIfNeeded on kubernetes-upgrade-760000: state=Stopped err=<nil>
	W0731 12:03:16.800130    4535 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:03:16.804559    4535 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-760000" ...
	I0731 12:03:16.809524    4535 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:03:16.809731    4535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:ab:35:62:92:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubernetes-upgrade-760000/disk.qcow2
	I0731 12:03:16.817736    4535 main.go:141] libmachine: STDOUT: 
	I0731 12:03:16.817803    4535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:03:16.817864    4535 fix.go:56] duration metric: took 18.410875ms for fixHost
	I0731 12:03:16.817885    4535 start.go:83] releasing machines lock for "kubernetes-upgrade-760000", held for 18.569375ms
	W0731 12:03:16.818101    4535 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-760000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:03:16.826531    4535 out.go:177] 
	W0731 12:03:16.830413    4535 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:03:16.830446    4535 out.go:239] * 
	* 
	W0731 12:03:16.831811    4535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:03:16.843535    4535 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-760000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-760000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-760000 version --output=json: exit status 1 (52.999958ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-760000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-31 12:03:16.910226 -0700 PDT m=+2956.706923835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-760000 -n kubernetes-upgrade-760000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-760000 -n kubernetes-upgrade-760000: exit status 7 (30.448833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-760000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-760000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-760000
--- FAIL: TestKubernetesUpgrade (18.48s)
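
Editor's note: every start attempt above dies at the same step: the qemu2 driver launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never restarted. The Go sketch below reproduces only that connection probe; it is illustrative, not part of the test suite, and assumes the daemon is expected to be listening at the socket path shown in the log.

	package main

	import (
		"fmt"
		"net"
	)

	// Probe the unix socket that socket_vmnet_client dials. If no daemon is
	// listening, net.Dial fails with the same "connection refused" that the
	// driver reports in the transcript above.
	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, the socket_vmnet daemon on the CI host needs to be restarted before any of the socket_vmnet-backed tests in this run can pass.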

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.52s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19356
- KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current352006303/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.52s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.97s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19356
- KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2306760431/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.97s)
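
Editor's note: both subtests fail for the same environmental reason: the hyperkit driver exists only for darwin/amd64, and this job runs on darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic is exercised. A guard of the following shape would report these as skips instead of failures; this is a sketch under that assumption, not the suite's actual code, and the real test file may already gate on architecture differently.

	package upgrade_test // hypothetical package name, for illustration only

	import (
		"runtime"
		"testing"
	)

	// Skip the hyperkit upgrade tests on hosts where the driver cannot run.
	func TestHyperkitDriverSkipUpgrade(t *testing.T) {
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("the hyperkit driver is not supported on darwin/arm64")
		}
		// ... the upgrade-v1.11.0 and upgrade-v1.2.0 subtests would run here ...
	}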

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (579.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1451202699 start -p stopped-upgrade-532000 --memory=2200 --vm-driver=qemu2 
E0731 12:03:47.043972    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1451202699 start -p stopped-upgrade-532000 --memory=2200 --vm-driver=qemu2 : (40.949274125s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1451202699 -p stopped-upgrade-532000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1451202699 -p stopped-upgrade-532000 stop: (12.111720959s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0731 12:06:45.941458    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 12:08:47.039546    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m46.708523708s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-532000" primary control-plane node in "stopped-upgrade-532000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-532000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:04:11.608738    4588 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:04:11.608909    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:04:11.608914    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:04:11.608917    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:04:11.609077    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:04:11.610276    4588 out.go:298] Setting JSON to false
	I0731 12:04:11.631377    4588 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3820,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:04:11.631446    4588 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:04:11.636064    4588 out.go:177] * [stopped-upgrade-532000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:04:11.643003    4588 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:04:11.643048    4588 notify.go:220] Checking for updates...
	I0731 12:04:11.651017    4588 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:04:11.654085    4588 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:04:11.657899    4588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:04:11.661018    4588 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:04:11.664068    4588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:04:11.665700    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:04:11.668962    4588 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:04:11.672045    4588 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:04:11.673767    4588 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:04:11.681012    4588 start.go:297] selected driver: qemu2
	I0731 12:04:11.681017    4588 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:11.681067    4588 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:04:11.683382    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:04:11.683398    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:04:11.683424    4588 start.go:340] cluster config:
	{Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:11.683478    4588 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:04:11.692036    4588 out.go:177] * Starting "stopped-upgrade-532000" primary control-plane node in "stopped-upgrade-532000" cluster
	I0731 12:04:11.696014    4588 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:04:11.696027    4588 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0731 12:04:11.696036    4588 cache.go:56] Caching tarball of preloaded images
	I0731 12:04:11.696091    4588 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:04:11.696096    4588 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0731 12:04:11.696137    4588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/config.json ...
	I0731 12:04:11.696629    4588 start.go:360] acquireMachinesLock for stopped-upgrade-532000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:04:11.696663    4588 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "stopped-upgrade-532000"
	I0731 12:04:11.696670    4588 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:04:11.696676    4588 fix.go:54] fixHost starting: 
	I0731 12:04:11.696777    4588 fix.go:112] recreateIfNeeded on stopped-upgrade-532000: state=Stopped err=<nil>
	W0731 12:04:11.696785    4588 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:04:11.700995    4588 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-532000" ...
	I0731 12:04:11.708937    4588 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:04:11.709003    4588 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50472-:22,hostfwd=tcp::50473-:2376,hostname=stopped-upgrade-532000 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/disk.qcow2
	I0731 12:04:11.757586    4588 main.go:141] libmachine: STDOUT: 
	I0731 12:04:11.757632    4588 main.go:141] libmachine: STDERR: 
	I0731 12:04:11.757639    4588 main.go:141] libmachine: Waiting for VM to start (ssh -p 50472 docker@127.0.0.1)...
	I0731 12:04:32.149649    4588 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/config.json ...
	I0731 12:04:32.150465    4588 machine.go:94] provisionDockerMachine start ...
	I0731 12:04:32.150652    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.150964    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.150976    4588 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 12:04:32.237443    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 12:04:32.237473    4588 buildroot.go:166] provisioning hostname "stopped-upgrade-532000"
	I0731 12:04:32.237558    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.237803    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.237814    4588 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-532000 && echo "stopped-upgrade-532000" | sudo tee /etc/hostname
	I0731 12:04:32.321278    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-532000
	
	I0731 12:04:32.321371    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.321563    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.321574    4588 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-532000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-532000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-532000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 12:04:32.391604    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 12:04:32.391616    4588 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19356-1202/.minikube CaCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19356-1202/.minikube}
	I0731 12:04:32.391630    4588 buildroot.go:174] setting up certificates
	I0731 12:04:32.391636    4588 provision.go:84] configureAuth start
	I0731 12:04:32.391642    4588 provision.go:143] copyHostCerts
	I0731 12:04:32.391729    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem, removing ...
	I0731 12:04:32.391736    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem
	I0731 12:04:32.391842    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/cert.pem (1123 bytes)
	I0731 12:04:32.392024    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem, removing ...
	I0731 12:04:32.392030    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem
	I0731 12:04:32.392086    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/key.pem (1679 bytes)
	I0731 12:04:32.392198    4588 exec_runner.go:144] found /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem, removing ...
	I0731 12:04:32.392202    4588 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem
	I0731 12:04:32.392254    4588 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.pem (1082 bytes)
	I0731 12:04:32.392340    4588 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-532000 san=[127.0.0.1 localhost minikube stopped-upgrade-532000]
	I0731 12:04:32.513592    4588 provision.go:177] copyRemoteCerts
	I0731 12:04:32.513636    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 12:04:32.513644    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:32.550182    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 12:04:32.557109    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 12:04:32.563651    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 12:04:32.570855    4588 provision.go:87] duration metric: took 179.21775ms to configureAuth
	I0731 12:04:32.570865    4588 buildroot.go:189] setting minikube options for container-runtime
	I0731 12:04:32.570969    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:04:32.571005    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.571096    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.571102    4588 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0731 12:04:32.637396    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0731 12:04:32.637404    4588 buildroot.go:70] root file system type: tmpfs
	I0731 12:04:32.637471    4588 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0731 12:04:32.637522    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.637633    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.637668    4588 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0731 12:04:32.709924    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0731 12:04:32.709968    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:32.710083    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:32.710099    4588 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0731 12:04:33.086234    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0731 12:04:33.086269    4588 machine.go:97] duration metric: took 935.808291ms to provisionDockerMachine
	I0731 12:04:33.086275    4588 start.go:293] postStartSetup for "stopped-upgrade-532000" (driver="qemu2")
	I0731 12:04:33.086282    4588 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 12:04:33.086328    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 12:04:33.086338    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:33.122843    4588 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 12:04:33.124388    4588 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 12:04:33.124395    4588 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/addons for local assets ...
	I0731 12:04:33.124486    4588 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19356-1202/.minikube/files for local assets ...
	I0731 12:04:33.124612    4588 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem -> 17012.pem in /etc/ssl/certs
	I0731 12:04:33.124745    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 12:04:33.127408    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:04:33.134674    4588 start.go:296] duration metric: took 48.394041ms for postStartSetup
	I0731 12:04:33.134686    4588 fix.go:56] duration metric: took 21.438302958s for fixHost
	I0731 12:04:33.134725    4588 main.go:141] libmachine: Using SSH client type: native
	I0731 12:04:33.134822    4588 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10283ea10] 0x102841270 <nil>  [] 0s} localhost 50472 <nil> <nil>}
	I0731 12:04:33.134826    4588 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 12:04:33.200388    4588 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722452673.609304963
	
	I0731 12:04:33.200401    4588 fix.go:216] guest clock: 1722452673.609304963
	I0731 12:04:33.200405    4588 fix.go:229] Guest: 2024-07-31 12:04:33.609304963 -0700 PDT Remote: 2024-07-31 12:04:33.134688 -0700 PDT m=+21.556032084 (delta=474.616963ms)
	I0731 12:04:33.200417    4588 fix.go:200] guest clock delta is within tolerance: 474.616963ms
	I0731 12:04:33.200420    4588 start.go:83] releasing machines lock for "stopped-upgrade-532000", held for 21.504044084s
	I0731 12:04:33.200493    4588 ssh_runner.go:195] Run: cat /version.json
	I0731 12:04:33.200495    4588 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 12:04:33.200502    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:04:33.200515    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	W0731 12:04:33.201088    4588 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:50594->127.0.0.1:50472: read: connection reset by peer
	I0731 12:04:33.201107    4588 retry.go:31] will retry after 343.569647ms: ssh: handshake failed: read tcp 127.0.0.1:50594->127.0.0.1:50472: read: connection reset by peer
	W0731 12:04:33.604639    4588 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 12:04:33.604862    4588 ssh_runner.go:195] Run: systemctl --version
	I0731 12:04:33.609384    4588 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 12:04:33.613545    4588 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 12:04:33.613627    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0731 12:04:33.619997    4588 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0731 12:04:33.629189    4588 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 12:04:33.629204    4588 start.go:495] detecting cgroup driver to use...
	I0731 12:04:33.629357    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:04:33.640482    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0731 12:04:33.645171    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0731 12:04:33.649142    4588 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0731 12:04:33.649189    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0731 12:04:33.652807    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:04:33.656367    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0731 12:04:33.660011    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0731 12:04:33.663471    4588 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 12:04:33.667158    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0731 12:04:33.670524    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0731 12:04:33.673387    4588 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0731 12:04:33.676210    4588 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 12:04:33.679190    4588 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 12:04:33.682084    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:33.762920    4588 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0731 12:04:33.769442    4588 start.go:495] detecting cgroup driver to use...
	I0731 12:04:33.769510    4588 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0731 12:04:33.775211    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:04:33.780076    4588 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 12:04:33.787242    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 12:04:33.792216    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:04:33.796694    4588 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0731 12:04:33.867067    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0731 12:04:33.872100    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 12:04:33.877273    4588 ssh_runner.go:195] Run: which cri-dockerd
	I0731 12:04:33.878504    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0731 12:04:33.881205    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0731 12:04:33.886110    4588 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0731 12:04:33.975722    4588 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0731 12:04:34.037586    4588 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0731 12:04:34.037650    4588 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0731 12:04:34.042773    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:34.119900    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:04:35.272574    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.152674042s)
	I0731 12:04:35.272628    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0731 12:04:35.277067    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:04:35.281583    4588 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0731 12:04:35.360309    4588 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0731 12:04:35.440470    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:35.525238    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0731 12:04:35.530911    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0731 12:04:35.535586    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:35.618495    4588 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0731 12:04:35.657447    4588 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0731 12:04:35.657524    4588 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0731 12:04:35.659500    4588 start.go:563] Will wait 60s for crictl version
	I0731 12:04:35.659553    4588 ssh_runner.go:195] Run: which crictl
	I0731 12:04:35.661162    4588 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 12:04:35.675176    4588 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0731 12:04:35.675245    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:04:35.693787    4588 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0731 12:04:35.713518    4588 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0731 12:04:35.713644    4588 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0731 12:04:35.714839    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 12:04:35.718610    4588 kubeadm.go:883] updating cluster {Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 12:04:35.718651    4588 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0731 12:04:35.718692    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:04:35.729499    4588 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:04:35.729507    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:04:35.729552    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:04:35.732518    4588 ssh_runner.go:195] Run: which lz4
	I0731 12:04:35.733771    4588 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 12:04:35.735066    4588 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 12:04:35.735076    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0731 12:04:36.627794    4588 docker.go:649] duration metric: took 894.061917ms to copy over tarball
	I0731 12:04:36.627846    4588 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 12:04:37.785663    4588 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157820958s)
	I0731 12:04:37.785677    4588 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 12:04:37.800881    4588 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0731 12:04:37.803871    4588 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0731 12:04:37.809222    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:37.891213    4588 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0731 12:04:39.413455    4588 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.522248958s)
	I0731 12:04:39.413572    4588 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0731 12:04:39.423995    4588 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0731 12:04:39.424006    4588 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0731 12:04:39.424010    4588 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 12:04:39.429125    4588 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:39.431074    4588 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.432604    4588 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.432751    4588 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:39.434479    4588 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.434507    4588 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.436010    4588 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.436086    4588 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.437076    4588 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.437090    4588 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.438081    4588 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.438182    4588 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.438937    4588 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 12:04:39.439048    4588 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.439929    4588 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.440470    4588 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 12:04:39.880101    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.888410    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.891328    4588 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0731 12:04:39.891351    4588 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.891401    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 12:04:39.904392    4588 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0731 12:04:39.904403    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 12:04:39.904413    4588 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.904459    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 12:04:39.911716    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.915030    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 12:04:39.915889    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.920546    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 12:04:39.921977    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.928997    4588 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0731 12:04:39.929020    4588 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 12:04:39.929074    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	W0731 12:04:39.935980    4588 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0731 12:04:39.936145    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.941107    4588 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0731 12:04:39.941132    4588 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.941187    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 12:04:39.946905    4588 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0731 12:04:39.946925    4588 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.946976    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0731 12:04:39.948756    4588 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0731 12:04:39.948769    4588 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0731 12:04:39.948799    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0731 12:04:39.958374    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 12:04:39.970730    4588 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0731 12:04:39.970754    4588 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.970810    4588 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 12:04:39.971753    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0731 12:04:39.971815    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 12:04:39.971859    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:04:39.977123    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0731 12:04:39.977239    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 12:04:39.983825    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 12:04:39.983839    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 12:04:39.983850    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0731 12:04:39.983850    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0731 12:04:39.984011    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 12:04:39.984094    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:04:39.986503    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 12:04:39.986516    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0731 12:04:40.010541    4588 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 12:04:40.010556    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0731 12:04:40.084781    4588 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0731 12:04:40.084889    4588 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.085900    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0731 12:04:40.096417    4588 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 12:04:40.096488    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0731 12:04:40.127484    4588 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0731 12:04:40.127509    4588 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.127574    4588 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:04:40.221038    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 12:04:40.221106    4588 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0731 12:04:40.221213    4588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:04:40.233285    4588 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0731 12:04:40.233315    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0731 12:04:40.299277    4588 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0731 12:04:40.299292    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0731 12:04:40.621357    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0731 12:04:40.621380    4588 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 12:04:40.621385    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0731 12:04:40.778715    4588 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 12:04:40.778756    4588 cache_images.go:92] duration metric: took 1.354760125s to LoadCachedImages
	W0731 12:04:40.778800    4588 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
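
Each "needs transfer" line above comes from comparing the image ID the runtime reports against the hash recorded for the cached copy; a mismatch (or a missing image) means remove, scp from the host cache, and reload with docker load. A hedged Go sketch of just that decision step follows; the sha256: prefix on the expected ID is an assumption about docker's inspect output format, and the hash is the pause:3.7 value from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether an image must be reloaded: true when the
    // runtime has no such image, or when its ID differs from the expected one.
    func needsTransfer(image, expectedID string) bool {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // image absent (or daemon error): reload it
    	}
    	return strings.TrimSpace(string(out)) != expectedID
    }

    func main() {
    	// Expected ID taken from the log line for pause:3.7, with the
    	// assumed "sha256:" prefix docker uses for image IDs.
    	fmt.Println(needsTransfer("registry.k8s.io/pause:3.7",
    		"sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }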
	I0731 12:04:40.778805    4588 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0731 12:04:40.778867    4588 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-532000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 12:04:40.778936    4588 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0731 12:04:40.792906    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:04:40.792917    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:04:40.792924    4588 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 12:04:40.792933    4588 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-532000 NodeName:stopped-upgrade-532000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 12:04:40.792993    4588 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-532000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 12:04:40.793048    4588 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0731 12:04:40.795811    4588 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 12:04:40.795834    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 12:04:40.798699    4588 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0731 12:04:40.803615    4588 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 12:04:40.808733    4588 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0731 12:04:40.813836    4588 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0731 12:04:40.814976    4588 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
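
The bash one-liner above rewrites /etc/hosts idempotently: grep -v strips any stale control-plane.minikube.internal entry, echo appends the current mapping, and the scratch file is copied back with sudo. The same logic in Go is sketched below; it writes to a scratch file (a stand-in for the one-liner's /tmp/h.$$) rather than installing over /etc/hosts, since that final step needs root.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const ip = "10.0.2.15"

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any line ending in "<tab>control-plane.minikube.internal",
    		// exactly what the grep -v pattern in the log matches.
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)

    	// Write a scratch copy; installing it over /etc/hosts requires root,
    	// which the one-liner gets via `sudo cp`.
    	out := strings.Join(kept, "\n") + "\n"
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Print(out)
    }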
	I0731 12:04:40.818832    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:04:40.894784    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:04:40.901354    4588 certs.go:68] Setting up /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000 for IP: 10.0.2.15
	I0731 12:04:40.901368    4588 certs.go:194] generating shared ca certs ...
	I0731 12:04:40.901377    4588 certs.go:226] acquiring lock for ca certs: {Name:mkf42ffcc2bf4238c4563b7710ee6f745a9fc0bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:40.901566    4588 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key
	I0731 12:04:40.901621    4588 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key
	I0731 12:04:40.901628    4588 certs.go:256] generating profile certs ...
	I0731 12:04:40.901696    4588 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key
	I0731 12:04:40.901716    4588 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741
	I0731 12:04:40.901729    4588 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0731 12:04:41.091849    4588 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 ...
	I0731 12:04:41.091864    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741: {Name:mk4631f82fd7195a71dca1562372b13c69979a2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.092158    4588 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741 ...
	I0731 12:04:41.092164    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741: {Name:mk8f0693cc4cbd008d7e5e97e68b7d08bcead493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.092309    4588 certs.go:381] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt.5d550741 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt
	I0731 12:04:41.092457    4588 certs.go:385] copying /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key.5d550741 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key
	I0731 12:04:41.092623    4588 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.key
	I0731 12:04:41.092756    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem (1338 bytes)
	W0731 12:04:41.092788    4588 certs.go:480] ignoring /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701_empty.pem, impossibly tiny 0 bytes
	I0731 12:04:41.092795    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 12:04:41.092814    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem (1082 bytes)
	I0731 12:04:41.092832    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem (1123 bytes)
	I0731 12:04:41.092850    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/key.pem (1679 bytes)
	I0731 12:04:41.092887    4588 certs.go:484] found cert: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem (1708 bytes)
	I0731 12:04:41.093226    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 12:04:41.100213    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 12:04:41.107309    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 12:04:41.114561    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0731 12:04:41.122760    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0731 12:04:41.130321    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 12:04:41.137627    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 12:04:41.144539    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 12:04:41.151537    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 12:04:41.158806    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/1701.pem --> /usr/share/ca-certificates/1701.pem (1338 bytes)
	I0731 12:04:41.166091    4588 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/ssl/certs/17012.pem --> /usr/share/ca-certificates/17012.pem (1708 bytes)
	I0731 12:04:41.172759    4588 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 12:04:41.177684    4588 ssh_runner.go:195] Run: openssl version
	I0731 12:04:41.179486    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 12:04:41.182842    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.184405    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:14 /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.184424    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 12:04:41.186223    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 12:04:41.189057    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1701.pem && ln -fs /usr/share/ca-certificates/1701.pem /etc/ssl/certs/1701.pem"
	I0731 12:04:41.191931    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.193434    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:21 /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.193455    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1701.pem
	I0731 12:04:41.195271    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1701.pem /etc/ssl/certs/51391683.0"
	I0731 12:04:41.198805    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17012.pem && ln -fs /usr/share/ca-certificates/17012.pem /etc/ssl/certs/17012.pem"
	I0731 12:04:41.201935    4588 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.203262    4588 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:21 /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.203278    4588 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17012.pem
	I0731 12:04:41.205106    4588 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17012.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 12:04:41.207959    4588 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 12:04:41.209537    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 12:04:41.211426    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 12:04:41.213497    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 12:04:41.216114    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 12:04:41.218068    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 12:04:41.219889    4588 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
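
Each `openssl x509 -noout -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a cert needs regenerating. A native Go equivalent of that check, assuming a PEM-encoded certificate at the path taken from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the cert at path expires within d,
    // matching the semantics of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin(
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		86400*time.Second)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }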
	I0731 12:04:41.221783    4588 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50507 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 12:04:41.221850    4588 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:04:41.231790    4588 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 12:04:41.235328    4588 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 12:04:41.235333    4588 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 12:04:41.235355    4588 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 12:04:41.238573    4588 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 12:04:41.238868    4588 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-532000" does not appear in /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:04:41.238962    4588 kubeconfig.go:62] /Users/jenkins/minikube-integration/19356-1202/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-532000" cluster setting kubeconfig missing "stopped-upgrade-532000" context setting]
	I0731 12:04:41.239151    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:04:41.239573    4588 kapi.go:59] client config for stopped-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bd41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:04:41.239882    4588 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 12:04:41.242602    4588 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-532000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
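
The drift detection above works because `diff -u` exits 0 for identical files and 1 when they differ, so exit status 1 maps to "reconfigure from the new config". A small Go sketch of that convention, with the file paths taken from the log and any exit status above 1 treated as a real error:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new` and interprets exit status 1
    // as "files differ", the same convention kubeadm.go relies on.
    func configDrifted(oldPath, newPath string) (bool, []byte, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, out, nil // status 1: drift detected
    	}
    	return false, out, err // identical (nil err) or a genuine failure
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Printf("kubeadm config drift:\n%s", diff)
    	}
    }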
	I0731 12:04:41.242607    4588 kubeadm.go:1160] stopping kube-system containers ...
	I0731 12:04:41.242651    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0731 12:04:41.253229    4588 docker.go:483] Stopping containers: [4723c1374220 b381bfc4361b ec713c22bdd4 96caf573f6dd 1afdb28c0dae 250b8cef76fb 0ee55201c776 e533f78e771c]
	I0731 12:04:41.253288    4588 ssh_runner.go:195] Run: docker stop 4723c1374220 b381bfc4361b ec713c22bdd4 96caf573f6dd 1afdb28c0dae 250b8cef76fb 0ee55201c776 e533f78e771c
	I0731 12:04:41.263990    4588 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 12:04:41.269496    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:04:41.272102    4588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:04:41.272107    4588 kubeadm.go:157] found existing configuration files:
	
	I0731 12:04:41.272125    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf
	I0731 12:04:41.275218    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:04:41.275239    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:04:41.278455    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf
	I0731 12:04:41.280959    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:04:41.280982    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:04:41.283713    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf
	I0731 12:04:41.286841    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:04:41.286864    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:04:41.289743    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf
	I0731 12:04:41.292205    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:04:41.292225    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:04:41.295279    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:04:41.298180    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:41.321436    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:41.848180    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:41.984866    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:42.007769    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 12:04:42.032822    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:04:42.032900    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:42.533159    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:43.034947    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:04:43.039284    4588 api_server.go:72] duration metric: took 1.006472792s to wait for apiserver process to appear ...
	I0731 12:04:43.039294    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:04:43.039302    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:48.041339    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:48.041406    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:53.041693    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:53.041761    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:04:58.042272    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:04:58.042294    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:03.042645    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:03.042739    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:08.043548    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:08.043612    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:13.044743    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:13.044798    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:18.046090    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:18.046114    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:23.047568    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:23.047589    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:28.048242    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:28.048340    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:33.050900    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:33.050933    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:38.053102    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:38.053149    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:43.055296    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
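
From 12:04:43 onward every healthz probe above times out after about 5 seconds; the API server never answers, and each exhausted window triggers the log-gathering pass that follows. A minimal Go sketch of this style of polling loop is below; the endpoint is taken from the log, and certificate verification is skipped only because the sketch has no cluster CA at hand, whereas minikube uses the profile's client config.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5 s gaps between attempts
    		Transport: &http.Transport{
    			// Sketch only: no cluster CA available here.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for attempt := 1; attempt <= 12; attempt++ {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			fmt.Printf("attempt %d: %v\n", attempt, err)
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("apiserver healthy")
    			return
    		}
    	}
    	fmt.Println("apiserver never became healthy")
    }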
	I0731 12:05:43.055468    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:43.071365    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:43.071445    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:43.083965    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:43.084043    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:43.099703    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:43.099787    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:43.110052    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:43.110125    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:43.120929    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:43.121002    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:43.133225    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:43.133306    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:43.148976    4588 logs.go:276] 0 containers: []
	W0731 12:05:43.148989    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:43.149045    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:43.159646    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:05:43.159674    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:43.159682    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:43.165264    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:43.165274    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:43.182996    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:43.183012    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:43.199114    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:43.199126    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:43.213130    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:43.213147    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:43.250383    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:43.250401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:43.264253    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:43.264268    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:43.276460    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:43.276469    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:43.287912    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:43.287926    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:43.311421    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:43.311428    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:43.394932    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:43.394944    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:05:43.438209    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:43.438230    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:43.450557    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:43.450569    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:43.467529    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:43.467539    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:43.478947    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:43.478958    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:43.493865    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:43.493875    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:43.505327    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:43.505339    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:46.019995    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:51.020985    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:51.021140    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:51.038434    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:51.038523    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:51.049350    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:51.049424    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:51.059852    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:51.059916    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:51.071010    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:51.071082    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:51.081380    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:51.081443    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:51.095615    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:51.095676    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:51.106053    4588 logs.go:276] 0 containers: []
	W0731 12:05:51.106064    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:51.106122    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:51.117736    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:05:51.117758    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:51.117763    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:51.130633    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:51.130646    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:51.142217    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:51.142228    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:05:51.181617    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:51.181633    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:51.197419    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:51.197432    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:51.217249    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:51.217261    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:51.230436    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:51.230450    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:51.244335    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:51.244345    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:51.263622    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:51.263636    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:51.288367    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:51.288376    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:51.300146    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:51.300157    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:51.304540    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:51.304547    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:51.341118    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:51.341130    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:51.360563    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:51.360574    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:51.375886    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:51.375897    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:51.387546    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:51.387559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:51.401703    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:51.401712    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:53.940587    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:05:58.943007    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:05:58.943491    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:05:58.982512    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:05:58.982675    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:05:59.003305    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:05:59.003449    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:05:59.020896    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:05:59.020965    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:05:59.033877    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:05:59.033949    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:05:59.044631    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:05:59.044710    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:05:59.054646    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:05:59.054722    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:05:59.064792    4588 logs.go:276] 0 containers: []
	W0731 12:05:59.064806    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:05:59.064872    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:05:59.075487    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:05:59.075504    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:05:59.075509    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:05:59.092964    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:05:59.092975    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:05:59.131543    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:05:59.131553    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:05:59.146654    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:05:59.146665    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:05:59.158764    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:05:59.158776    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:05:59.174689    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:05:59.174702    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:05:59.199756    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:05:59.199763    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:05:59.211082    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:05:59.211093    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:05:59.245585    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:05:59.245596    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:05:59.257440    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:05:59.257451    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:05:59.268976    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:05:59.268986    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:05:59.281909    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:05:59.281921    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:05:59.286170    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:05:59.286179    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:05:59.299623    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:05:59.299632    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:05:59.314550    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:05:59.314559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:05:59.326703    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:05:59.326716    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:05:59.340603    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:05:59.340613    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
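
The "Checking apiserver healthz ... stopped" pairs above repeat every few seconds: each probe GETs https://10.0.2.15:8443/healthz, gives up after a 5-second client timeout, and the diagnostic sweep runs again. A minimal Go sketch of such a poll-with-timeout loop follows; the pollHealthz helper is hypothetical and this is illustrative only, not minikube's actual api_server.go:

	// healthz_poll.go - sketch of the poll-with-timeout pattern shown in the
	// log above (illustrative only; not minikube's actual implementation).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz (hypothetical name) GETs the apiserver /healthz endpoint
	// until it answers 200 OK or the overall deadline passes. Each attempt
	// uses a 5-second client timeout, matching the ~5 s gaps between the
	// "Checking" and "stopped" log lines above.
	func pollHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver inside the VM serves a self-signed cert,
			// so a bare probe like this one skips verification.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(2 * time.Second) // back off before the next attempt
		}
		return fmt.Errorf("apiserver %s never became healthy", url)
	}

	func main() {
		if err := pollHealthz("https://10.0.2.15:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
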
	I0731 12:06:01.880848    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:06.883125    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:06.883274    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:06.897599    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:06.897679    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:06.909260    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:06.909337    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:06.919410    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:06.919470    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:06.930112    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:06.930176    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:06.941373    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:06.941440    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:06.952058    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:06.952119    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:06.961731    4588 logs.go:276] 0 containers: []
	W0731 12:06:06.961743    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:06.961807    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:06.972093    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:06.972111    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:06.972116    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:06.976791    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:06.976799    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:07.017561    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:07.017576    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:07.035732    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:07.035745    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:07.048127    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:07.048142    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:07.066933    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:07.066949    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:07.092530    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:07.092540    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:07.104518    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:07.104528    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:07.143243    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:07.143254    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:07.179924    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:07.179939    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:07.195018    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:07.195032    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:07.208163    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:07.208173    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:07.219650    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:07.219659    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:07.233462    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:07.233472    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:07.247302    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:07.247319    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:07.266696    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:07.266708    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:07.280212    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:07.280223    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:09.793302    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:14.795465    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:14.795586    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:14.808729    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:14.808802    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:14.819040    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:14.819105    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:14.829767    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:14.829834    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:14.840680    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:14.840747    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:14.851072    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:14.851143    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:14.861009    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:14.861073    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:14.871667    4588 logs.go:276] 0 containers: []
	W0731 12:06:14.871678    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:14.871736    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:14.882266    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:14.882282    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:14.882288    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:14.886478    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:14.886485    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:14.910802    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:14.910812    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:14.924505    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:14.924518    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:14.936110    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:14.936123    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:14.961204    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:14.961218    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:14.974802    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:14.974816    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:14.988505    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:14.988519    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:14.999970    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:14.999982    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:15.012002    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:15.012016    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:15.049060    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:15.049072    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:15.083558    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:15.083571    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:15.120413    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:15.120423    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:15.135780    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:15.135792    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:15.147731    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:15.147741    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:15.163717    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:15.163728    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:15.175812    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:15.175823    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
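
Each sweep starts by enumerating the containers for every control-plane component with "docker ps -a --filter=name=... --format={{.ID}}", which is why exited containers keep appearing (two apiservers, two etcds, and so on). A short Go sketch of that enumeration, using a hypothetical containerIDs helper; minikube actually runs the same command over SSH inside the guest:

	// container_ids.go - sketch of the per-component container enumeration
	// seen in the log above (hypothetical helper; illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers, running or exited,
	// whose name matches the given k8s component prefix.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one ID per output line
	}

	func main() {
		for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			// Mirrors the "N containers: [...]" lines in the log.
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}
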
	I0731 12:06:17.689210    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:22.691452    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:22.691613    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:22.702971    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:22.703046    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:22.713624    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:22.713697    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:22.723992    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:22.724061    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:22.734500    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:22.734571    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:22.745542    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:22.745611    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:22.756436    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:22.756514    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:22.770966    4588 logs.go:276] 0 containers: []
	W0731 12:06:22.770978    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:22.771037    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:22.783850    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:22.783870    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:22.783875    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:22.794925    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:22.794935    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:22.820179    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:22.820189    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:22.837792    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:22.837805    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:22.854874    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:22.854884    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:22.877558    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:22.877569    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:22.891506    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:22.891517    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:22.930030    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:22.930042    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:22.942461    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:22.942475    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:22.954162    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:22.954174    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:22.992438    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:22.992446    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:22.996783    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:22.996788    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:23.010595    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:23.010608    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:23.022148    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:23.022159    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:23.039230    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:23.039241    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:23.075138    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:23.075149    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:23.086195    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:23.086206    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:25.601178    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:30.603480    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:30.603727    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:30.623959    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:30.624053    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:30.639198    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:30.639270    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:30.651419    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:30.651495    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:30.662777    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:30.662848    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:30.673308    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:30.673379    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:30.686410    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:30.686482    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:30.696349    4588 logs.go:276] 0 containers: []
	W0731 12:06:30.696361    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:30.696421    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:30.706954    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:30.706972    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:30.706978    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:30.744323    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:30.744333    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:30.756250    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:30.756261    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:30.767317    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:30.767330    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:30.804791    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:30.804801    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:30.818723    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:30.818734    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:30.831230    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:30.831241    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:30.836002    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:30.836007    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:30.870142    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:30.870155    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:30.885403    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:30.885416    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:30.898752    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:30.898764    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:30.910235    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:30.910251    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:30.933837    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:30.933844    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:30.946182    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:30.946194    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:30.961077    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:30.961087    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:30.972331    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:30.972345    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:30.989044    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:30.989055    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
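
Once the IDs are known, every "Gathering logs for ..." step above is a "docker logs --tail 400 <id>" call, and the "container status" step prefers crictl but falls back to plain docker. A sketch of both, with hypothetical tailContainer and containerStatus helpers (illustrative only; the real commands run over SSH):

	// gather_logs.go - sketch of the per-container log tailing and the
	// crictl-or-docker fallback seen in the log above (illustrative only).
	package main

	import (
		"fmt"
		"os/exec"
	)

	// tailContainer mirrors: docker logs --tail 400 <id>
	func tailContainer(id string) (string, error) {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	// containerStatus mirrors the shell fallback from the log: use crictl
	// if it is installed, otherwise fall back to docker ps.
	func containerStatus() (string, error) {
		cmd := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		if logs, err := tailContainer("85a597c1b3a9"); err == nil {
			fmt.Println(logs)
		}
		if status, err := containerStatus(); err == nil {
			fmt.Println(status)
		}
	}
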
	I0731 12:06:33.508818    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:38.511199    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:38.511357    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:38.527760    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:38.527845    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:38.540953    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:38.541029    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:38.551986    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:38.552047    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:38.562665    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:38.562738    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:38.572904    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:38.572974    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:38.583405    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:38.583472    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:38.593509    4588 logs.go:276] 0 containers: []
	W0731 12:06:38.593519    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:38.593573    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:38.606793    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:38.606810    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:38.606816    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:38.618865    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:38.618875    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:38.623721    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:38.623728    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:38.665276    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:38.665286    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:38.679872    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:38.679882    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:38.690685    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:38.690696    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:38.702146    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:38.702156    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:38.713501    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:38.713512    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:38.736876    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:38.736883    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:38.752289    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:38.752303    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:38.766294    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:38.766303    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:38.777391    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:38.777401    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:38.815656    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:38.815664    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:38.850786    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:38.850797    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:38.865636    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:38.865646    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:38.879665    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:38.879676    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:38.891560    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:38.891570    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:41.410890    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:46.411887    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:46.412065    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:46.433700    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:46.433786    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:46.448551    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:46.448618    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:46.459452    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:46.459519    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:46.469778    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:46.469851    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:46.480553    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:46.480618    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:46.499380    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:46.499452    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:46.510315    4588 logs.go:276] 0 containers: []
	W0731 12:06:46.510327    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:46.510390    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:46.520518    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:46.520538    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:46.520544    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:46.532588    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:46.532600    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:46.550018    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:46.550030    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:46.561372    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:46.561386    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:46.585168    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:46.585179    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:46.596583    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:46.596594    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:46.635815    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:46.635825    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:46.649367    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:46.649381    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:46.661306    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:46.661323    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:46.672474    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:46.672485    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:46.687288    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:46.687300    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:46.698841    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:46.698851    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:46.737539    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:46.737549    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:46.752144    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:46.752154    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:46.789828    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:46.789840    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:46.793948    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:46.793956    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:46.808364    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:46.808375    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
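
The remaining steps in each sweep pull host-side logs: journalctl for the kubelet and container-runtime units, and dmesg filtered to warning level and above, trimmed to 400 lines. The commands below are copied from the log; the bashOut wrapper is a hypothetical convenience for illustration:

	// system_logs.go - sketch of the journalctl/dmesg gathering steps above
	// (the real commands run inside the guest over SSH as root).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func bashOut(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Last 400 journal entries for the kubelet and runtime units.
		kubelet, _ := bashOut(`sudo journalctl -u kubelet -n 400`)
		runtime, _ := bashOut(`sudo journalctl -u docker -u cri-docker -n 400`)
		// Kernel messages at warning level and above, last 400 lines.
		dmesg, _ := bashOut(`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		fmt.Println(len(kubelet), len(runtime), len(dmesg))
	}
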
	I0731 12:06:49.324293    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:06:54.326501    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:06:54.326693    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:06:54.342310    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:06:54.342402    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:06:54.358758    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:06:54.358836    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:06:54.373159    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:06:54.373230    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:06:54.383869    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:06:54.383946    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:06:54.394615    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:06:54.394686    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:06:54.404723    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:06:54.404800    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:06:54.415512    4588 logs.go:276] 0 containers: []
	W0731 12:06:54.415523    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:06:54.415583    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:06:54.426806    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:06:54.426853    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:06:54.426859    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:06:54.461605    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:06:54.461616    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:06:54.505418    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:06:54.505431    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:06:54.516804    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:06:54.516816    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:06:54.521097    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:06:54.521104    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:06:54.533426    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:06:54.533437    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:06:54.558336    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:06:54.558345    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:06:54.598256    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:06:54.598277    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:06:54.613224    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:06:54.613235    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:06:54.624970    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:06:54.624983    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:06:54.637168    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:06:54.637179    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:06:54.654389    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:06:54.654401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:06:54.668534    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:06:54.668549    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:06:54.680174    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:06:54.680186    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:06:54.695951    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:06:54.695961    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:06:54.715864    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:06:54.715879    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:06:54.729776    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:06:54.729786    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:06:57.248330    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:02.249955    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:02.250192    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:02.270048    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:02.270151    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:02.284326    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:02.284419    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:02.298221    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:02.298291    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:02.309307    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:02.309373    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:02.320159    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:02.320225    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:02.330744    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:02.330841    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:02.347395    4588 logs.go:276] 0 containers: []
	W0731 12:07:02.347406    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:02.347463    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:02.357375    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:02.357392    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:02.357397    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:02.396115    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:02.396124    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:02.407451    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:02.407462    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:02.423351    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:02.423364    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:02.438793    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:02.438804    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:02.462181    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:02.462195    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:02.475975    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:02.475985    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:02.500117    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:02.500125    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:02.512905    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:02.512917    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:02.529918    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:02.529929    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:02.541584    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:02.541598    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:02.553839    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:02.553852    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:02.593930    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:02.593944    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:02.608115    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:02.608133    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:02.622544    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:02.622558    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:02.636225    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:02.636236    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:02.640994    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:02.641001    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:05.181506    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:10.183697    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:10.183958    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:10.205112    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:10.205217    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:10.220609    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:10.220689    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:10.233129    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:10.233212    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:10.244437    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:10.244503    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:10.255930    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:10.256002    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:10.270551    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:10.270620    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:10.280907    4588 logs.go:276] 0 containers: []
	W0731 12:07:10.280921    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:10.280981    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:10.291471    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:10.291490    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:10.291497    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:10.296076    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:10.296082    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:10.318733    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:10.318745    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:10.330179    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:10.330190    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:10.370781    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:10.370794    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:10.390030    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:10.390041    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:10.401207    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:10.401220    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:10.418166    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:10.418175    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:10.431006    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:10.431017    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:10.469740    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:10.469757    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:10.504063    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:10.504073    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:10.525135    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:10.525147    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:10.541057    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:10.541070    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:10.558339    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:10.558350    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:10.571662    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:10.571673    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:10.582641    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:10.582652    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:10.605326    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:10.605334    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:13.119072    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:18.121862    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:18.122204    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:18.154754    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:18.154885    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:18.175143    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:18.175242    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:18.189466    4588 logs.go:276] 1 containers: [fe1506c139ee]
	I0731 12:07:18.189548    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:18.201197    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:18.201274    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:18.212018    4588 logs.go:276] 1 containers: [29599a6845e8]
	I0731 12:07:18.212084    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:18.222689    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:18.222758    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:18.233252    4588 logs.go:276] 0 containers: []
	W0731 12:07:18.233264    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:18.233324    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:18.244250    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:18.244267    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:18.244273    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:18.249218    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:18.249228    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:18.286364    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:18.286375    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:18.298185    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:18.298198    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:18.315729    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:18.315741    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:18.327664    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:18.327674    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:18.352974    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:18.352983    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:18.368860    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:18.368873    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:18.395383    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:18.395395    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:18.412247    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:18.412257    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:18.450749    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:18.450760    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:18.464966    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:18.464977    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:18.502913    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:18.502923    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:18.519265    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:18.519275    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:18.538309    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:18.538320    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:18.550030    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:18.550041    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:18.561461    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:18.561473    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
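
Each gathering pass above follows the same two-step pattern: enumerate the kubeadm-managed containers by a name filter, then tail each one's logs. A minimal bash sketch of the equivalent manual commands, run inside the guest (the name filter, the 400-line tail, and the example container ID are taken from this run's log; everything else is assumed):

    # List kube-apiserver containers, running or exited, printing only their IDs.
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
    # Tail the last 400 lines of one of the IDs returned above
    # (85a597c1b3a9 is the ID seen in this run).
    docker logs --tail 400 85a597c1b3a9
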
	I0731 12:07:21.079611    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:26.082308    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
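
The healthz probe that keeps failing above can be reproduced by hand. A minimal sketch, assuming the same in-guest address and port, with TLS verification skipped (-k) since the apiserver serves a cluster-internal certificate:

    # Probe the apiserver health endpoint with a 5-second budget,
    # mirroring the client timeout visible in the log.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
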
	I0731 12:07:26.082739    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:26.129863    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:26.130008    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:26.150723    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:26.150828    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:26.167233    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:07:26.167315    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:26.180093    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:26.180168    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:26.190588    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:07:26.190660    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:26.201805    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:26.201880    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:26.212499    4588 logs.go:276] 0 containers: []
	W0731 12:07:26.212509    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:26.212564    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:26.223015    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:26.223032    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:26.223039    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:26.247253    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:26.247262    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:26.260266    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:26.260277    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:26.275452    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:26.275463    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:26.287252    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:26.287264    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:26.299463    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:26.299472    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:26.314947    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:26.314958    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:26.327807    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:26.327822    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:26.365335    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:26.365356    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:26.378698    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:26.378709    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:26.382605    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:26.382611    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:26.400567    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:26.400581    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:26.414471    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:26.414481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:26.428541    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:26.428551    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:26.466610    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:26.466622    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:26.481007    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:26.481019    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:26.519589    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:26.519599    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
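
Besides per-container logs, each pass also collects host-level sources. A sketch of those commands as they appear in the log, assuming the binary and kubeconfig paths that minikube provisions inside the guest:

    # Kernel messages at warning severity and above, last 400 lines.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Unit logs for the kubelet and the Docker/cri-dockerd runtime.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    # Node conditions and events, via the in-guest kubeconfig.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
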
	I0731 12:07:29.035934    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:34.038650    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:34.038933    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:34.072543    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:34.072704    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:34.091849    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:34.091936    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:34.105382    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:07:34.105460    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:34.117802    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:34.117862    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:34.128433    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:07:34.128493    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:34.139371    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:34.139442    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:34.150608    4588 logs.go:276] 0 containers: []
	W0731 12:07:34.150621    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:34.150682    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:34.161930    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:34.161950    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:34.161956    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:34.168389    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:34.168397    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:34.186834    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:34.186846    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:34.224099    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:34.224110    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:34.237636    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:34.237652    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:34.277698    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:34.277711    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:34.291479    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:34.291490    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:34.305550    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:34.305559    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:34.324678    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:34.324690    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:34.335737    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:34.335748    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:34.372446    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:34.372458    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:34.384349    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:34.384361    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:34.408951    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:34.408960    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:34.421540    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:34.421554    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:34.433508    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:34.433521    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:34.451775    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:34.451790    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:34.465579    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:34.465589    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:36.978536    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:41.979453    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:41.979662    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:41.999904    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:42.000010    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:42.014967    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:42.015039    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:42.027094    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:07:42.027171    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:42.038293    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:42.038366    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:42.048849    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:07:42.048915    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:42.062588    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:42.062663    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:42.073929    4588 logs.go:276] 0 containers: []
	W0731 12:07:42.073940    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:42.073997    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:42.083938    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:42.083957    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:42.083962    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:42.123444    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:42.123453    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:42.144029    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:42.144040    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:42.159703    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:42.159714    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:42.176470    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:42.176481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:42.188082    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:42.188094    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:42.192745    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:42.192752    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:42.228422    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:42.228433    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:42.268458    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:42.268471    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:42.280236    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:42.280247    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:42.294358    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:42.294368    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:42.305492    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:42.305504    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:42.327923    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:42.327930    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:42.345979    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:42.345989    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:42.359347    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:42.359358    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:42.370852    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:42.370864    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:42.383091    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:42.383105    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:44.896815    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:49.898946    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:49.899111    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:49.914468    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:49.914550    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:49.926708    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:49.926779    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:49.937683    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:07:49.937746    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:49.948118    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:49.948186    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:49.958506    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:07:49.958565    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:49.968980    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:49.969046    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:49.982700    4588 logs.go:276] 0 containers: []
	W0731 12:07:49.982711    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:49.982768    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:49.993379    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:49.993396    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:49.993401    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:07:50.010901    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:50.010913    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:50.022418    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:50.022428    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:50.056874    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:50.056885    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:50.071023    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:50.071034    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:50.082238    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:50.082248    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:50.106065    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:50.106077    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:50.117903    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:50.117913    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:50.129826    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:50.129836    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:50.141029    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:50.141039    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:50.156614    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:50.156625    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:50.175388    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:50.175399    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:50.188598    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:50.188609    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:50.227512    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:50.227523    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:50.243430    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:50.243440    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:50.256427    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:50.256438    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:50.295748    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:50.295758    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:52.802227    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:07:57.804368    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:07:57.804506    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:07:57.815744    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:07:57.815808    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:07:57.826481    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:07:57.826539    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:07:57.836988    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:07:57.837058    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:07:57.847512    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:07:57.847580    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:07:57.858394    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:07:57.858454    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:07:57.868323    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:07:57.868383    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:07:57.878490    4588 logs.go:276] 0 containers: []
	W0731 12:07:57.878501    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:07:57.878560    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:07:57.888531    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:07:57.888547    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:07:57.888552    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:07:57.900804    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:07:57.900815    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:07:57.917523    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:07:57.917534    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:07:57.929405    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:07:57.929417    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:07:57.968241    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:07:57.968250    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:07:58.005336    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:07:58.005350    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:07:58.019546    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:07:58.019558    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:07:58.056651    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:07:58.056661    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:07:58.071114    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:07:58.071127    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:07:58.082114    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:07:58.082125    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:07:58.094483    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:07:58.094494    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:07:58.099184    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:07:58.099190    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:07:58.111874    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:07:58.111885    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:07:58.127115    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:07:58.127126    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:07:58.144337    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:07:58.144348    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:07:58.155841    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:07:58.155851    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:07:58.180792    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:07:58.180803    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:00.695235    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:05.697610    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:05.697876    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:05.723219    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:05.723316    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:05.739479    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:05.739564    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:05.752441    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:08:05.752533    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:05.763378    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:05.763449    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:05.781029    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:08:05.781106    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:05.792072    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:05.792136    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:05.804845    4588 logs.go:276] 0 containers: []
	W0731 12:08:05.804856    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:05.804917    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:05.815664    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:05.815682    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:05.815689    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:05.855428    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:05.855447    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:05.860755    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:05.860771    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:05.881279    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:05.881290    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:05.892723    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:05.892736    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:05.903794    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:05.903804    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:05.917906    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:05.917915    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:05.932480    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:05.932497    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:05.943737    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:05.943751    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:05.955922    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:05.955932    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:05.967631    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:05.967641    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:06.003538    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:06.003551    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:06.015126    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:06.015136    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:06.033575    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:06.033587    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:06.047167    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:06.047180    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:06.083726    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:06.083739    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:06.097886    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:06.097897    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:08.623699    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:13.626068    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:13.626419    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:13.660610    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:13.660747    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:13.680251    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:13.680349    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:13.695429    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:08:13.695517    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:13.708400    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:13.708485    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:13.720569    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:08:13.720635    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:13.732052    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:13.732116    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:13.742412    4588 logs.go:276] 0 containers: []
	W0731 12:08:13.742424    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:13.742486    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:13.753070    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:13.753086    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:13.753093    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:13.767799    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:13.767815    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:13.779131    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:13.779143    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:13.791429    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:13.791440    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:13.806516    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:13.806525    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:13.818309    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:13.818325    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:13.857206    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:13.857218    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:13.862759    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:13.862766    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:13.897209    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:13.897220    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:13.935231    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:13.935245    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:13.950689    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:13.950700    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:13.961724    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:13.961734    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:13.983547    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:13.983555    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:14.000972    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:14.000982    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:14.012616    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:14.012628    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:14.027070    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:14.027082    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:14.041235    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:14.041245    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:16.554857    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:21.556464    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:21.556893    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:21.595072    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:21.595213    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:21.616768    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:21.616887    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:21.632210    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:08:21.632295    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:21.644603    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:21.644680    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:21.656493    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:08:21.656565    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:21.667414    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:21.667487    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:21.677750    4588 logs.go:276] 0 containers: []
	W0731 12:08:21.677763    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:21.677822    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:21.688270    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:21.688290    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:21.688297    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:21.692488    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:21.692496    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:21.703824    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:21.703836    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:21.728385    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:21.728395    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:21.752124    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:21.752137    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:21.767965    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:21.767975    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:21.779792    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:21.779805    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:21.794256    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:21.794267    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:21.808424    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:21.808434    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:21.823453    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:21.823465    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:21.836913    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:21.836924    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:21.848330    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:21.848342    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:21.859809    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:21.859819    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:21.898412    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:21.898424    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:21.936070    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:21.936084    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:21.974342    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:21.974353    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:21.985777    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:21.985787    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:24.499193    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:29.501449    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:29.501881    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:29.541852    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:29.541996    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:29.565283    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:29.565395    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:29.580570    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:08:29.580653    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:29.593317    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:29.593392    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:29.604400    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:08:29.604467    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:29.620091    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:29.620162    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:29.636216    4588 logs.go:276] 0 containers: []
	W0731 12:08:29.636229    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:29.636288    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:29.647420    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:29.647437    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:29.647444    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:29.660974    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:29.660985    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:29.673135    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:29.673145    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:29.685367    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:29.685377    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:29.723197    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:29.723217    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:29.727525    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:29.727534    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:29.739470    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:29.739481    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:29.755441    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:29.755453    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:29.773330    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:29.773341    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:29.784927    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:29.784937    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:29.806789    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:29.806802    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:29.845355    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:29.845369    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:29.856944    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:29.856955    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:29.868376    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:29.868390    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:29.911964    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:29.911978    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:29.926201    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:29.926210    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:29.940029    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:29.940040    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:32.458981    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:37.461296    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:37.461763    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:08:37.499271    4588 logs.go:276] 2 containers: [85a597c1b3a9 4723c1374220]
	I0731 12:08:37.499429    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:08:37.524082    4588 logs.go:276] 2 containers: [b6e0c3ab1ac3 ec713c22bdd4]
	I0731 12:08:37.524194    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:08:37.538783    4588 logs.go:276] 1 container: [fe1506c139ee]
	I0731 12:08:37.538872    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:08:37.550619    4588 logs.go:276] 2 containers: [99a34fecb39e b381bfc4361b]
	I0731 12:08:37.550685    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:08:37.561042    4588 logs.go:276] 1 container: [29599a6845e8]
	I0731 12:08:37.561108    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:08:37.571221    4588 logs.go:276] 2 containers: [b0f7d793953e 96caf573f6dd]
	I0731 12:08:37.571286    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:08:37.581629    4588 logs.go:276] 0 containers: []
	W0731 12:08:37.581641    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:08:37.581701    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:08:37.592579    4588 logs.go:276] 2 containers: [a653a1f00d16 ea76bc57ae2b]
	I0731 12:08:37.592597    4588 logs.go:123] Gathering logs for kube-controller-manager [b0f7d793953e] ...
	I0731 12:08:37.592603    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f7d793953e"
	I0731 12:08:37.609903    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:08:37.609913    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:08:37.632728    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:08:37.632736    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:08:37.668778    4588 logs.go:123] Gathering logs for coredns [fe1506c139ee] ...
	I0731 12:08:37.668789    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe1506c139ee"
	I0731 12:08:37.680708    4588 logs.go:123] Gathering logs for kube-scheduler [b381bfc4361b] ...
	I0731 12:08:37.680719    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b381bfc4361b"
	I0731 12:08:37.696402    4588 logs.go:123] Gathering logs for kube-proxy [29599a6845e8] ...
	I0731 12:08:37.696412    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29599a6845e8"
	I0731 12:08:37.708226    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:08:37.708237    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:08:37.720314    4588 logs.go:123] Gathering logs for kube-apiserver [4723c1374220] ...
	I0731 12:08:37.720329    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4723c1374220"
	I0731 12:08:37.758882    4588 logs.go:123] Gathering logs for etcd [b6e0c3ab1ac3] ...
	I0731 12:08:37.758896    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b6e0c3ab1ac3"
	I0731 12:08:37.776241    4588 logs.go:123] Gathering logs for kube-controller-manager [96caf573f6dd] ...
	I0731 12:08:37.776253    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96caf573f6dd"
	I0731 12:08:37.789985    4588 logs.go:123] Gathering logs for storage-provisioner [ea76bc57ae2b] ...
	I0731 12:08:37.789996    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea76bc57ae2b"
	I0731 12:08:37.801881    4588 logs.go:123] Gathering logs for kube-apiserver [85a597c1b3a9] ...
	I0731 12:08:37.801892    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85a597c1b3a9"
	I0731 12:08:37.816063    4588 logs.go:123] Gathering logs for etcd [ec713c22bdd4] ...
	I0731 12:08:37.816077    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec713c22bdd4"
	I0731 12:08:37.830998    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:08:37.831010    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 12:08:37.869569    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:08:37.869582    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:08:37.874273    4588 logs.go:123] Gathering logs for kube-scheduler [99a34fecb39e] ...
	I0731 12:08:37.874284    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99a34fecb39e"
	I0731 12:08:37.887856    4588 logs.go:123] Gathering logs for storage-provisioner [a653a1f00d16] ...
	I0731 12:08:37.887874    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a653a1f00d16"
	I0731 12:08:40.404644    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:45.407336    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:45.407409    4588 kubeadm.go:597] duration metric: took 4m4.175951416s to restartPrimaryControlPlane
	W0731 12:08:45.407485    4588 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0731 12:08:45.407517    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0731 12:08:46.483656    4588 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076142042s)
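Having timed out waiting on /healthz, minikube abandons the restart path and wipes the control plane with kubeadm reset, which removes the static Pod manifests and the kubeconfig files under /etc/kubernetes; that is why every config check below fails with "No such file or directory". The reset invocation, reconstructed verbatim from the log line above, would be run inside the guest as:

    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force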
	I0731 12:08:46.483709    4588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 12:08:46.488535    4588 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 12:08:46.491462    4588 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 12:08:46.494933    4588 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 12:08:46.494940    4588 kubeadm.go:157] found existing configuration files:
	
	I0731 12:08:46.494976    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf
	I0731 12:08:46.497430    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 12:08:46.497457    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 12:08:46.500768    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf
	I0731 12:08:46.503634    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 12:08:46.503657    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 12:08:46.506331    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf
	I0731 12:08:46.509327    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 12:08:46.509352    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 12:08:46.512127    4588 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf
	I0731 12:08:46.514695    4588 kubeadm.go:163] "https://control-plane.minikube.internal:50507" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50507 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 12:08:46.514723    4588 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 12:08:46.517624    4588 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 12:08:46.535067    4588 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0731 12:08:46.535097    4588 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 12:08:46.584073    4588 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 12:08:46.584135    4588 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 12:08:46.584188    4588 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0731 12:08:46.634798    4588 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 12:08:46.637733    4588 out.go:204]   - Generating certificates and keys ...
	I0731 12:08:46.637795    4588 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 12:08:46.637837    4588 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 12:08:46.637881    4588 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 12:08:46.637917    4588 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 12:08:46.637955    4588 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 12:08:46.637991    4588 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 12:08:46.638025    4588 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 12:08:46.638059    4588 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 12:08:46.638098    4588 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 12:08:46.638140    4588 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 12:08:46.638177    4588 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 12:08:46.638208    4588 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 12:08:46.729165    4588 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 12:08:46.790443    4588 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 12:08:46.912439    4588 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 12:08:47.014664    4588 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 12:08:47.040835    4588 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 12:08:47.041261    4588 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 12:08:47.041287    4588 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 12:08:47.128023    4588 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 12:08:47.136181    4588 out.go:204]   - Booting up control plane ...
	I0731 12:08:47.136341    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 12:08:47.136396    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 12:08:47.136441    4588 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 12:08:47.136596    4588 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 12:08:47.136703    4588 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 12:08:52.133448    4588 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004445 seconds
	I0731 12:08:52.133564    4588 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 12:08:52.139325    4588 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 12:08:52.649078    4588 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 12:08:52.649222    4588 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-532000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 12:08:53.155874    4588 kubeadm.go:310] [bootstrap-token] Using token: trl3uf.qsefqlsp7p6ue2xn
	I0731 12:08:53.159383    4588 out.go:204]   - Configuring RBAC rules ...
	I0731 12:08:53.159436    4588 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 12:08:53.159523    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 12:08:53.163260    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 12:08:53.164255    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 12:08:53.165041    4588 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 12:08:53.165911    4588 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 12:08:53.169581    4588 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 12:08:53.349712    4588 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 12:08:53.560368    4588 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 12:08:53.560858    4588 kubeadm.go:310] 
	I0731 12:08:53.560893    4588 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 12:08:53.560901    4588 kubeadm.go:310] 
	I0731 12:08:53.560938    4588 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 12:08:53.560941    4588 kubeadm.go:310] 
	I0731 12:08:53.560954    4588 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 12:08:53.560987    4588 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 12:08:53.561016    4588 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 12:08:53.561019    4588 kubeadm.go:310] 
	I0731 12:08:53.561046    4588 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 12:08:53.561049    4588 kubeadm.go:310] 
	I0731 12:08:53.561072    4588 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 12:08:53.561075    4588 kubeadm.go:310] 
	I0731 12:08:53.561104    4588 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 12:08:53.561142    4588 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 12:08:53.561179    4588 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 12:08:53.561184    4588 kubeadm.go:310] 
	I0731 12:08:53.561229    4588 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 12:08:53.561264    4588 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 12:08:53.561267    4588 kubeadm.go:310] 
	I0731 12:08:53.561310    4588 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token trl3uf.qsefqlsp7p6ue2xn \
	I0731 12:08:53.561355    4588 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d \
	I0731 12:08:53.561374    4588 kubeadm.go:310] 	--control-plane 
	I0731 12:08:53.561376    4588 kubeadm.go:310] 
	I0731 12:08:53.561424    4588 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 12:08:53.561428    4588 kubeadm.go:310] 
	I0731 12:08:53.561469    4588 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token trl3uf.qsefqlsp7p6ue2xn \
	I0731 12:08:53.561521    4588 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c5979e1039b837660fe1f78eca702be07aacac834fdbf3725eabed57f6add83d 
	I0731 12:08:53.561691    4588 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
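kubeadm init completes in roughly seven seconds and prints its standard post-install instructions, including the control-plane and worker join commands with the bootstrap token. The one warning it leaves is that the kubelet systemd unit is not enabled, so the kubelet would not come back on its own after a reboot; the remedy kubeadm itself names in the warning is:

    sudo systemctl enable kubelet.service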
	I0731 12:08:53.561715    4588 cni.go:84] Creating CNI manager for ""
	I0731 12:08:53.561724    4588 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:08:53.565801    4588 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 12:08:53.572998    4588 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 12:08:53.576367    4588 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
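With Kubernetes v1.24+ and the docker runtime under the qemu2 driver, minikube recommends the plain bridge CNI and copies a small conflist into /etc/cni/net.d. The exact 496-byte file is not reproduced in the log; a minimal bridge-plus-portmap config of the general shape minikube writes (illustrative only; the subnet is an assumption, not taken from this run) would look like:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF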
	I0731 12:08:53.581194    4588 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 12:08:53.581247    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 12:08:53.581258    4588 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-532000 minikube.k8s.io/updated_at=2024_07_31T12_08_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=stopped-upgrade-532000 minikube.k8s.io/primary=true
	I0731 12:08:53.585034    4588 ops.go:34] apiserver oom_adj: -16
	I0731 12:08:53.622700    4588 kubeadm.go:1113] duration metric: took 41.498125ms to wait for elevateKubeSystemPrivileges
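The two kubectl invocations above are minikube's post-init bootstrap: the minikube-rbac ClusterRoleBinding gives the kube-system default service account cluster-admin so kube-system workloads have unrestricted API access, and the node is labeled with the minikube version, commit, and update time. The RBAC step, verbatim from the log:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig=/var/lib/minikube/kubeconfig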
	I0731 12:08:53.622714    4588 kubeadm.go:394] duration metric: took 4m12.404947s to StartCluster
	I0731 12:08:53.622724    4588 settings.go:142] acquiring lock: {Name:mk8345ab3fe8ab5ac7063435ec374691aa431221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:08:53.622811    4588 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:08:53.623258    4588 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/kubeconfig: {Name:mk4905546f9b19d2ca153ee2e30398b887795222 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:08:53.623483    4588 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:08:53.623498    4588 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 12:08:53.623538    4588 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-532000"
	I0731 12:08:53.623549    4588 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-532000"
	W0731 12:08:53.623553    4588 addons.go:243] addon storage-provisioner should already be in state true
	I0731 12:08:53.623564    4588 host.go:66] Checking if "stopped-upgrade-532000" exists ...
	I0731 12:08:53.623570    4588 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-532000"
	I0731 12:08:53.623584    4588 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-532000"
	I0731 12:08:53.623630    4588 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:08:53.624746    4588 kapi.go:59] client config for stopped-upgrade-532000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/stopped-upgrade-532000/client.key", CAFile:"/Users/jenkins/minikube-integration/19356-1202/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bd41b0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 12:08:53.624865    4588 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-532000"
	W0731 12:08:53.624870    4588 addons.go:243] addon default-storageclass should already be in state true
	I0731 12:08:53.624880    4588 host.go:66] Checking if "stopped-upgrade-532000" exists ...
	I0731 12:08:53.628027    4588 out.go:177] * Verifying Kubernetes components...
	I0731 12:08:53.628333    4588 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 12:08:53.631157    4588 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 12:08:53.631163    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:08:53.633951    4588 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 12:08:53.637788    4588 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 12:08:53.641966    4588 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:08:53.641973    4588 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 12:08:53.641979    4588 sshutil.go:53] new ssh client: &{IP:localhost Port:50472 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/stopped-upgrade-532000/id_rsa Username:docker}
	I0731 12:08:53.738599    4588 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 12:08:53.743775    4588 api_server.go:52] waiting for apiserver process to appear ...
	I0731 12:08:53.743817    4588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 12:08:53.747505    4588 api_server.go:72] duration metric: took 124.009167ms to wait for apiserver process to appear ...
	I0731 12:08:53.747512    4588 api_server.go:88] waiting for apiserver healthz status ...
	I0731 12:08:53.747518    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:08:53.790745    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 12:08:53.820616    4588 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 12:08:58.749567    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:08:58.749604    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:03.749844    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:03.749884    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:08.750126    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:08.750165    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:13.750959    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:13.751004    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:18.751643    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:18.751667    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:23.752424    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:23.752463    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
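From 12:08:53 onward the apiserver never answers: each probe of https://10.0.2.15:8443/healthz is given about five seconds and times out, and api_server.go simply retries. 10.0.2.15 is the guest-side address of QEMU's user-mode network, which is typically not reachable from the host, so host-side probes like these can time out even while the control plane is healthy inside the VM; the default-storageclass error just below fails the same way (dial tcp 10.0.2.15:8443: i/o timeout). A manual equivalent of the failing probe:

    curl -k --max-time 5 https://10.0.2.15:8443/healthz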
	W0731 12:09:24.144959    4588 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0731 12:09:24.148355    4588 out.go:177] * Enabled addons: storage-provisioner
	I0731 12:09:24.155159    4588 addons.go:510] duration metric: took 30.532148666s for enable addons: enabled=[storage-provisioner]
	I0731 12:09:28.753504    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:28.753540    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:33.755045    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:33.755070    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:38.756847    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:38.756887    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:43.758999    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:43.759040    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:48.761172    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:48.761216    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:09:53.763389    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:09:53.763483    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:09:53.774489    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:09:53.774565    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:09:53.785288    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:09:53.785363    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:09:53.798626    4588 logs.go:276] 2 containers: [eea97bc0e240 6c8c8587cb42]
	I0731 12:09:53.798702    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:09:53.809100    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:09:53.809175    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:09:53.823624    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:09:53.823697    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:09:53.834199    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:09:53.834268    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:09:53.844880    4588 logs.go:276] 0 containers: []
	W0731 12:09:53.844891    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:09:53.844949    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:09:53.855151    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:09:53.855167    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:09:53.855173    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:09:53.866721    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:09:53.866733    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:09:53.882354    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:09:53.882366    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:09:53.893989    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:09:53.893999    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:09:53.918990    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:09:53.918997    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:09:53.923000    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:09:53.923008    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:09:53.958531    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:09:53.958542    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:09:53.969879    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:09:53.969892    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:09:53.987398    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:09:53.987409    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:09:53.999192    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:09:53.999202    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:09:54.010458    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:09:54.010471    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:09:54.043533    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:09:54.043630    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:09:54.044125    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:09:54.044130    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:09:54.058410    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:09:54.058420    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:09:54.072509    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:09:54.072521    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:09:54.072547    4588 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:09:54.072552    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:09:54.072555    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:09:54.072560    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:09:54.072563    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
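The block above is one complete diagnostic pass, and it repeats essentially unchanged every ~15 seconds for the rest of the run: minikube enumerates the control-plane containers with docker ps name filters, tails the last 400 lines of each with docker logs, and scans journalctl for kubelet problems. The only problem it keeps surfacing is the pair of coredns reflector messages, a node-authorizer RBAC denial ("no relationship found between node ... and this object") that is common while a control plane is still coming up and is distinct from the unreachable healthz endpoint. The container-discovery step, verbatim from the log:

    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}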
	I0731 12:10:04.076677    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:10:09.077743    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:10:09.078179    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:10:09.116766    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:10:09.116882    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:10:09.140170    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:10:09.140282    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:10:09.155184    4588 logs.go:276] 2 containers: [eea97bc0e240 6c8c8587cb42]
	I0731 12:10:09.155258    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:10:09.167789    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:10:09.167856    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:10:09.179022    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:10:09.179090    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:10:09.189645    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:10:09.189708    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:10:09.199964    4588 logs.go:276] 0 containers: []
	W0731 12:10:09.199975    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:10:09.200029    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:10:09.210852    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:10:09.210867    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:10:09.210873    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:10:09.215490    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:10:09.215498    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:10:09.232090    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:10:09.232101    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:10:09.244066    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:10:09.244079    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:10:09.268916    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:10:09.268924    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:10:09.302270    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:09.302360    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:09.302854    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:10:09.302858    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:10:09.336792    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:10:09.336806    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:10:09.350949    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:10:09.350960    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:10:09.364891    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:10:09.364899    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:10:09.376719    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:10:09.376732    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:10:09.388292    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:10:09.388304    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:10:09.405403    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:10:09.405416    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:10:09.417318    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:10:09.417332    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:10:09.430583    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:09.430593    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:10:09.430622    4588 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:10:09.430626    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:09.430630    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:09.430634    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:09.430637    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:10:19.434666    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:10:24.437088    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:10:24.437490    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:10:24.470569    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:10:24.470698    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:10:24.490735    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:10:24.490848    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:10:24.505892    4588 logs.go:276] 2 containers: [eea97bc0e240 6c8c8587cb42]
	I0731 12:10:24.505969    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:10:24.518101    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:10:24.518167    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:10:24.528957    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:10:24.529029    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:10:24.543331    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:10:24.543409    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:10:24.553683    4588 logs.go:276] 0 containers: []
	W0731 12:10:24.553693    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:10:24.553752    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:10:24.564009    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:10:24.564022    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:10:24.564028    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:10:24.568321    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:10:24.568328    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:10:24.580235    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:10:24.580249    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:10:24.592631    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:10:24.592643    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:10:24.604316    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:10:24.604330    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:10:24.627150    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:10:24.627158    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:10:24.647559    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:10:24.647572    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:10:24.659122    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:10:24.659132    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:10:24.691515    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:24.691606    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:24.692100    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:10:24.692104    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:10:24.728255    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:10:24.728266    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:10:24.742587    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:10:24.742596    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:10:24.757245    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:10:24.757257    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:10:24.768231    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:10:24.768243    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:10:24.783844    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:24.783857    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:10:24.783880    4588 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:10:24.783884    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:24.783887    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:24.783891    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:24.783894    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:10:34.787959    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:10:39.790646    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:10:39.791068    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:10:39.828010    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:10:39.828137    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:10:39.848238    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:10:39.848347    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:10:39.863662    4588 logs.go:276] 2 containers: [eea97bc0e240 6c8c8587cb42]
	I0731 12:10:39.863739    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:10:39.876154    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:10:39.876222    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:10:39.887448    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:10:39.887514    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:10:39.898052    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:10:39.898125    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:10:39.908895    4588 logs.go:276] 0 containers: []
	W0731 12:10:39.908908    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:10:39.908965    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:10:39.920069    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:10:39.920086    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:10:39.920092    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:10:39.953893    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:39.953987    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:39.954511    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:10:39.954516    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:10:39.969883    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:10:39.969896    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:10:39.985137    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:10:39.985150    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:10:40.003333    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:10:40.003344    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:10:40.015137    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:10:40.015147    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:10:40.019271    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:10:40.019278    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:10:40.053402    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:10:40.053413    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:10:40.068065    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:10:40.068077    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:10:40.079844    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:10:40.079856    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:10:40.091893    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:10:40.091905    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:10:40.103634    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:10:40.103646    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:10:40.126590    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:10:40.126596    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:10:40.139280    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:40.139292    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:10:40.139317    4588 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:10:40.139322    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:40.139326    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:40.139331    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:40.139334    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:10:50.143481    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:10:55.146241    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:10:55.146618    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:10:55.192530    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:10:55.192663    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:10:55.213021    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:10:55.213137    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:10:55.227558    4588 logs.go:276] 2 containers: [eea97bc0e240 6c8c8587cb42]
	I0731 12:10:55.227632    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:10:55.240716    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:10:55.240785    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:10:55.251175    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:10:55.251237    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:10:55.262017    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:10:55.262089    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:10:55.272832    4588 logs.go:276] 0 containers: []
	W0731 12:10:55.272842    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:10:55.272898    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:10:55.283381    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:10:55.283396    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:10:55.283402    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:10:55.304309    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:10:55.304321    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:10:55.315883    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:10:55.315896    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:10:55.330621    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:10:55.330636    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:10:55.352118    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:10:55.352128    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:10:55.364159    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:10:55.364171    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:10:55.383714    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:10:55.383726    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:10:55.395472    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:10:55.395485    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:10:55.418990    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:10:55.419000    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:10:55.453444    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:55.453540    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:55.454067    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:10:55.454073    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:10:55.458663    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:10:55.458672    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:10:55.493521    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:10:55.493535    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:10:55.510254    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:10:55.510268    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:10:55.522431    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:55.522442    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:10:55.522470    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:10:55.522475    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:10:55.522478    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:10:55.522482    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:10:55.522485    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
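The two kubelet problems repeated in every sweep are one RBAC symptom: the node identity system:node:stopped-upgrade-532000 is denied list/watch on the kube-system/coredns ConfigMap, and the "no relationship found between node ... and this object" wording comes from the Node authorizer, which only lets a kubelet read ConfigMaps referenced by pods currently bound to that node. Note also that the messages carry the same timestamp (Jul 31 19:09:06) in every sweep: each pass re-reads the last 400 journal lines, so this is one old event being re-reported, not a continuously recurring failure. One way to probe the authorization side by hand, sketched under the assumption that you hold impersonation-capable (e.g. cluster-admin) credentials for this cluster:

    # Ask the apiserver whether the node identity may list the ConfigMap.
    kubectl auth can-i list configmaps/coredns --namespace=kube-system \
      --as=system:node:stopped-upgrade-532000 --as-group=system:nodes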
	I0731 12:11:05.526459    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:11:10.528718    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:11:10.529141    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:11:10.566588    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:11:10.566715    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:11:10.587295    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:11:10.587397    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:11:10.602189    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:11:10.602265    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:11:10.614117    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:11:10.614183    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:11:10.624200    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:11:10.624262    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:11:10.634704    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:11:10.634768    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:11:10.645318    4588 logs.go:276] 0 containers: []
	W0731 12:11:10.645330    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:11:10.645383    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:11:10.656306    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:11:10.656323    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:11:10.656328    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:11:10.667987    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:11:10.667999    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:11:10.679457    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:11:10.679470    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:11:10.683639    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:11:10.683648    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:11:10.698030    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:11:10.698043    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:11:10.709455    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:11:10.709464    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:11:10.721360    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:11:10.721372    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:11:10.744563    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:11:10.744575    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:11:10.779259    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:10.779351    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:10.779844    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:11:10.779848    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:11:10.816438    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:11:10.816451    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:11:10.830872    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:11:10.830884    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:11:10.842138    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:11:10.842151    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:11:10.853857    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:11:10.853869    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:11:10.868579    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:11:10.868589    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:11:10.893953    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:11:10.893960    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:11:10.905003    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:10.905015    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:11:10.905040    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:11:10.905046    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:10.905050    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:10.905054    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:10.905056    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:20.909065    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:11:25.911210    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:11:25.911658    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:11:25.951321    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:11:25.951444    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:11:25.972986    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:11:25.973100    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:11:25.988914    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:11:25.988985    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:11:26.003350    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:11:26.003422    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:11:26.014511    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:11:26.014581    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:11:26.026295    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:11:26.026368    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:11:26.040042    4588 logs.go:276] 0 containers: []
	W0731 12:11:26.040053    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:11:26.040106    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:11:26.050575    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:11:26.050589    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:11:26.050594    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:11:26.082970    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:26.083061    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:26.083561    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:11:26.083565    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:11:26.097785    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:11:26.097793    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:11:26.112382    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:11:26.112392    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:11:26.128578    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:11:26.128590    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:11:26.143026    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:11:26.143037    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:11:26.158211    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:11:26.158224    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:11:26.169824    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:11:26.169837    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:11:26.181140    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:11:26.181151    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:11:26.185633    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:11:26.185640    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:11:26.221262    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:11:26.221273    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:11:26.232926    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:11:26.232937    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:11:26.244191    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:11:26.244204    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:11:26.265263    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:11:26.265272    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:11:26.277025    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:11:26.277036    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:11:26.302334    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:26.302342    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:11:26.302366    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:11:26.302369    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:26.302412    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:26.302419    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:26.302422    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:36.306492    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:11:41.309148    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:11:41.309595    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:11:41.348365    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:11:41.348527    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:11:41.371297    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:11:41.371413    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:11:41.387238    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:11:41.387314    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:11:41.399634    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:11:41.399701    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:11:41.412730    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:11:41.412797    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:11:41.424365    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:11:41.424429    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:11:41.434856    4588 logs.go:276] 0 containers: []
	W0731 12:11:41.434868    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:11:41.434923    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:11:41.444682    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:11:41.444701    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:11:41.444708    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:11:41.479132    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:41.479223    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:41.479716    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:11:41.479720    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:11:41.514223    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:11:41.514236    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:11:41.526450    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:11:41.526463    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:11:41.537510    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:11:41.537526    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:11:41.549500    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:11:41.549514    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:11:41.560851    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:11:41.560859    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:11:41.576135    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:11:41.576149    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:11:41.593538    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:11:41.593550    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:11:41.614452    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:11:41.614462    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:11:41.620677    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:11:41.620685    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:11:41.638587    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:11:41.638601    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:11:41.652013    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:11:41.652026    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:11:41.663758    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:11:41.663770    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:11:41.675080    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:11:41.675093    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:11:41.700301    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:41.700310    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:11:41.700335    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:11:41.700339    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:41.700342    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:41.700399    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:41.700403    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:51.704407    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:11:56.706655    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:11:56.706720    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:11:56.718586    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:11:56.718640    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:11:56.735737    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:11:56.735783    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:11:56.750316    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:11:56.750378    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:11:56.761813    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:11:56.761864    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:11:56.774381    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:11:56.774435    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:11:56.785610    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:11:56.785666    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:11:56.801722    4588 logs.go:276] 0 containers: []
	W0731 12:11:56.801731    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:11:56.801785    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:11:56.813985    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:11:56.814003    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:11:56.814008    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:11:56.861046    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:11:56.861061    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:11:56.878604    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:11:56.878616    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:11:56.894241    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:11:56.894253    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:11:56.919388    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:11:56.919398    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:11:56.952820    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:56.952917    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:56.953428    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:11:56.953437    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:11:56.966047    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:11:56.966056    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:11:56.977962    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:11:56.977978    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:11:56.995751    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:11:56.995766    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:11:57.008130    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:11:57.008138    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:11:57.022679    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:11:57.022692    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:11:57.039851    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:11:57.039862    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:11:57.054158    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:11:57.054166    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:11:57.066164    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:11:57.066176    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:11:57.079115    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:11:57.079128    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:11:57.084432    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:57.084442    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:11:57.084467    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:11:57.084472    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:11:57.084475    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:11:57.084479    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:57.084482    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:07.087317    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:12:12.089502    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:12:12.089689    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:12:12.111963    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:12:12.112072    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:12:12.128083    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:12:12.128164    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:12:12.141118    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:12:12.141183    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:12:12.152329    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:12:12.152398    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:12:12.162784    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:12:12.162852    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:12:12.173315    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:12:12.173372    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:12:12.183624    4588 logs.go:276] 0 containers: []
	W0731 12:12:12.183637    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:12:12.183690    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:12:12.194178    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:12:12.194196    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:12:12.194204    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:12:12.205799    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:12:12.205811    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:12:12.217015    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:12:12.217025    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:12:12.229030    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:12:12.229041    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:12:12.246392    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:12:12.246401    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:12:12.251199    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:12:12.251207    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:12:12.264971    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:12:12.264981    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:12:12.276524    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:12:12.276534    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:12:12.290343    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:12:12.290352    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:12:12.302045    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:12:12.302057    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:12:12.314749    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:12:12.314762    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:12:12.347414    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:12.347506    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:12.348007    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:12:12.348012    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:12:12.382576    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:12:12.382591    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:12:12.394466    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:12:12.394477    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:12:12.410449    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:12:12.410459    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:12:12.436023    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:12.436031    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:12:12.436061    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:12:12.436065    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:12.436083    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:12.436150    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:12.436155    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:22.440193    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:12:27.443023    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:12:27.443315    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:12:27.468907    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:12:27.469023    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:12:27.485814    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:12:27.485905    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:12:27.498415    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:12:27.498488    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:12:27.510153    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:12:27.510217    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:12:27.525198    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:12:27.525269    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:12:27.535268    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:12:27.535333    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:12:27.546263    4588 logs.go:276] 0 containers: []
	W0731 12:12:27.546276    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:12:27.546331    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:12:27.556856    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:12:27.556873    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:12:27.556877    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:12:27.574696    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:12:27.574706    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:12:27.586225    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:12:27.586238    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:12:27.598039    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:12:27.598054    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:12:27.633079    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:12:27.633094    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:12:27.648727    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:12:27.648741    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:12:27.660662    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:12:27.660674    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:12:27.665088    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:12:27.665093    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:12:27.676326    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:12:27.676339    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:12:27.699230    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:12:27.699235    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:12:27.731714    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:27.731804    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:27.732298    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:12:27.732302    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:12:27.746961    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:12:27.746973    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:12:27.761794    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:12:27.761806    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:12:27.775960    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:12:27.775973    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:12:27.789387    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:12:27.789398    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:12:27.800641    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:27.800655    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:12:27.800684    4588 out.go:239] X Problems detected in kubelet:
	W0731 12:12:27.800688    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:27.800692    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:27.800696    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:27.800699    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
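The trace continues in the same cadence: each probe starts about ten seconds after the previous sweep finishes and dies at the five-second client timeout (checks at 12:10:50, 12:11:05, 12:11:20, and so on). The probe can be reproduced by hand; a sketch assuming it is run from inside the guest (10.0.2.15 is the VM's own address under QEMU user networking, so it is typically not reachable from the host), with -k because the apiserver certificate is not trusted by default:

    # Manual version of the failing health check; --max-time mirrors the
    # 5s client timeout seen in the trace.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz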
	I0731 12:12:37.804830    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:12:42.807700    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:12:42.808134    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0731 12:12:42.845779    4588 logs.go:276] 1 containers: [34a8af120584]
	I0731 12:12:42.845910    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0731 12:12:42.867489    4588 logs.go:276] 1 containers: [6fd31bf6e898]
	I0731 12:12:42.867600    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0731 12:12:42.887081    4588 logs.go:276] 4 containers: [3694e0b726a0 7c0be65abf74 eea97bc0e240 6c8c8587cb42]
	I0731 12:12:42.887159    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0731 12:12:42.899215    4588 logs.go:276] 1 containers: [92e698d65631]
	I0731 12:12:42.899276    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0731 12:12:42.910214    4588 logs.go:276] 1 containers: [9be235eb203b]
	I0731 12:12:42.910274    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0731 12:12:42.920902    4588 logs.go:276] 1 containers: [56ade160ea61]
	I0731 12:12:42.920976    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0731 12:12:42.931755    4588 logs.go:276] 0 containers: []
	W0731 12:12:42.931766    4588 logs.go:278] No container was found matching "kindnet"
	I0731 12:12:42.931826    4588 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0731 12:12:42.942575    4588 logs.go:276] 1 containers: [d5afdc805975]
	I0731 12:12:42.942594    4588 logs.go:123] Gathering logs for coredns [3694e0b726a0] ...
	I0731 12:12:42.942599    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3694e0b726a0"
	I0731 12:12:42.954699    4588 logs.go:123] Gathering logs for coredns [eea97bc0e240] ...
	I0731 12:12:42.954709    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eea97bc0e240"
	I0731 12:12:42.969992    4588 logs.go:123] Gathering logs for kube-scheduler [92e698d65631] ...
	I0731 12:12:42.970005    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92e698d65631"
	I0731 12:12:42.985744    4588 logs.go:123] Gathering logs for storage-provisioner [d5afdc805975] ...
	I0731 12:12:42.985755    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5afdc805975"
	I0731 12:12:42.997582    4588 logs.go:123] Gathering logs for container status ...
	I0731 12:12:42.997594    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 12:12:43.009863    4588 logs.go:123] Gathering logs for kubelet ...
	I0731 12:12:43.009875    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 12:12:43.042071    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:43.042164    4588 logs.go:138] Found kubelet problem: Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:43.042658    4588 logs.go:123] Gathering logs for dmesg ...
	I0731 12:12:43.042662    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 12:12:43.046870    4588 logs.go:123] Gathering logs for kube-apiserver [34a8af120584] ...
	I0731 12:12:43.046879    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a8af120584"
	I0731 12:12:43.062715    4588 logs.go:123] Gathering logs for etcd [6fd31bf6e898] ...
	I0731 12:12:43.062724    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fd31bf6e898"
	I0731 12:12:43.076568    4588 logs.go:123] Gathering logs for kube-proxy [9be235eb203b] ...
	I0731 12:12:43.076580    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9be235eb203b"
	I0731 12:12:43.087970    4588 logs.go:123] Gathering logs for kube-controller-manager [56ade160ea61] ...
	I0731 12:12:43.087982    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ade160ea61"
	I0731 12:12:43.109216    4588 logs.go:123] Gathering logs for describe nodes ...
	I0731 12:12:43.109224    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 12:12:43.173950    4588 logs.go:123] Gathering logs for coredns [7c0be65abf74] ...
	I0731 12:12:43.173964    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c0be65abf74"
	I0731 12:12:43.186737    4588 logs.go:123] Gathering logs for coredns [6c8c8587cb42] ...
	I0731 12:12:43.186749    4588 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6c8c8587cb42"
	I0731 12:12:43.199340    4588 logs.go:123] Gathering logs for Docker ...
	I0731 12:12:43.199355    4588 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0731 12:12:43.223766    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:43.223780    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 12:12:43.223811    4588 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 12:12:43.223816    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: W0731 19:09:06.588936   10455 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	W0731 12:12:43.223820    4588 out.go:239]   Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	  Jul 31 19:09:06 stopped-upgrade-532000 kubelet[10455]: E0731 19:09:06.588988   10455 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-532000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-532000' and this object
	I0731 12:12:43.223825    4588 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:43.223827    4588 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:53.224922    4588 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0731 12:12:58.227173    4588 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0731 12:12:58.234900    4588 out.go:177] 
	W0731 12:12:58.237707    4588 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0731 12:12:58.237725    4588 out.go:239] * 
	* 
	W0731 12:12:58.239116    4588 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:58.253702    4588 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.86s)
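Two distinct symptoms appear in the trace above: the kubelet's node identity is denied access to the coredns ConfigMap ("no relationship found between node ... and this object"), and the apiserver's /healthz endpoint never reports healthy within the 6m0s window. A minimal diagnostic sketch for both, assuming the cluster were reachable; the endpoint, node identity, and namespace are copied from the log, and the commands are illustrative rather than part of the test:

	# Probe the same healthz endpoint the wait loop polls (IP/port from the log);
	# -k skips TLS verification and --max-time bounds the request.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Ask the authorizer the question behind the "no relationship found" errors:
	# may the node identity list ConfigMaps in kube-system?
	kubectl auth can-i list configmaps -n kube-system \
	  --as=system:node:stopped-upgrade-532000 --as-group=system:nodes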

                                                
                                    
TestPause/serial/Start (9.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-209000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-209000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.780394s)

                                                
                                                
-- stdout --
	* [pause-209000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-209000" primary control-plane node in "pause-209000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-209000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-209000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-209000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-209000 -n pause-209000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-209000 -n pause-209000: exit status 7 (43.696ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-209000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.82s)
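As with most of the remaining failures in this run, the start dies before the guest ever boots: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal host-side sketch for confirming the daemon's state, assuming the Homebrew/launchd installation implied by the client path in the verbose traces later in this report; the paths come from the logs, the checks themselves are illustrative:

	# Is the daemon's unix socket present, and is any socket_vmnet process serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is managed by launchd, check whether its job is loaded.
	sudo launchctl list | grep -i vmnet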

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 : exit status 80 (10.055907041s)

                                                
                                                
-- stdout --
	* [NoKubernetes-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-488000" primary control-plane node in "NoKubernetes-488000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-488000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-488000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000: exit status 7 (61.208833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.12s)
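When the daemon is simply down, restarting it and re-running the failed start is usually enough. A hedged recovery sketch, assuming socket_vmnet was installed through Homebrew as minikube's qemu2 driver documentation describes (sudo because the daemon binds the macOS vmnet framework); the start command is the one that just failed:

	# Restart the Homebrew-managed daemon, then retry the identical start.
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2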

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 : exit status 80 (5.235146333s)

                                                
                                                
-- stdout --
	* [NoKubernetes-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-488000
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000: exit status 7 (30.667542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)
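This variant fails while restarting an existing VM rather than creating one, and the error output itself suggests the recovery: delete the stale profile so the next start recreates the machine from scratch. A sketch of exactly that, using the binary path and profile name from the log:

	# Remove the stale profile, then repeat the start that failed above.
	out/minikube-darwin-arm64 delete -p NoKubernetes-488000
	out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2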

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 : exit status 80 (5.235396334s)

                                                
                                                
-- stdout --
	* [NoKubernetes-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-488000
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000: exit status 7 (66.925583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 : exit status 80 (5.250507625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-488000
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-488000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-488000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-488000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-488000 -n NoKubernetes-488000: exit status 7 (37.96875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-488000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.915786792s)

                                                
                                                
-- stdout --
	* [auto-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-693000" primary control-plane node in "auto-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 12:11:08.853459    4835 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:11:08.853604    4835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:08.853607    4835 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:08.853609    4835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:08.853748    4835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:11:08.854860    4835 out.go:298] Setting JSON to false
	I0731 12:11:08.871501    4835 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4237,"bootTime":1722448831,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:11:08.871586    4835 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:11:08.877952    4835 out.go:177] * [auto-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:11:08.884974    4835 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:11:08.885030    4835 notify.go:220] Checking for updates...
	I0731 12:11:08.892044    4835 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:11:08.895022    4835 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:11:08.898073    4835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:11:08.901037    4835 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:11:08.902465    4835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:11:08.905312    4835 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:11:08.905379    4835 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:11:08.905426    4835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:11:08.910028    4835 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:11:08.915022    4835 start.go:297] selected driver: qemu2
	I0731 12:11:08.915027    4835 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:11:08.915034    4835 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:11:08.917315    4835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:11:08.919978    4835 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:11:08.923132    4835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:11:08.923164    4835 cni.go:84] Creating CNI manager for ""
	I0731 12:11:08.923170    4835 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:11:08.923173    4835 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:11:08.923194    4835 start.go:340] cluster config:
	{Name:auto-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:11:08.926698    4835 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:08.934064    4835 out.go:177] * Starting "auto-693000" primary control-plane node in "auto-693000" cluster
	I0731 12:11:08.937978    4835 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:11:08.937991    4835 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:11:08.938005    4835 cache.go:56] Caching tarball of preloaded images
	I0731 12:11:08.938071    4835 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:08.938077    4835 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:11:08.938143    4835 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/auto-693000/config.json ...
	I0731 12:11:08.938155    4835 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/auto-693000/config.json: {Name:mk2c778180929c9f260d324936c8c8cf65a1ea7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:08.938370    4835 start.go:360] acquireMachinesLock for auto-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:08.938400    4835 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "auto-693000"
	I0731 12:11:08.938409    4835 start.go:93] Provisioning new machine with config: &{Name:auto-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:08.938440    4835 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:08.945994    4835 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:08.961461    4835 start.go:159] libmachine.API.Create for "auto-693000" (driver="qemu2")
	I0731 12:11:08.961500    4835 client.go:168] LocalClient.Create starting
	I0731 12:11:08.961558    4835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:08.961593    4835 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:08.961601    4835 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:08.961637    4835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:08.961662    4835 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:08.961675    4835 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:08.962047    4835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:09.116071    4835 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:09.279263    4835 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:09.279270    4835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:09.279522    4835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:09.289167    4835 main.go:141] libmachine: STDOUT: 
	I0731 12:11:09.289184    4835 main.go:141] libmachine: STDERR: 
	I0731 12:11:09.289228    4835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2 +20000M
	I0731 12:11:09.297099    4835 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:09.297113    4835 main.go:141] libmachine: STDERR: 
	I0731 12:11:09.297131    4835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:09.297136    4835 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:09.297149    4835 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:09.297171    4835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:ab:25:c1:19:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:09.298760    4835 main.go:141] libmachine: STDOUT: 
	I0731 12:11:09.298776    4835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:09.298797    4835 client.go:171] duration metric: took 337.296333ms to LocalClient.Create
	I0731 12:11:11.300964    4835 start.go:128] duration metric: took 2.362539084s to createHost
	I0731 12:11:11.301051    4835 start.go:83] releasing machines lock for "auto-693000", held for 2.362679291s
	W0731 12:11:11.301181    4835 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:11.311325    4835 out.go:177] * Deleting "auto-693000" in qemu2 ...
	W0731 12:11:11.343133    4835 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:11.343161    4835 start.go:729] Will try again in 5 seconds ...
	I0731 12:11:16.345308    4835 start.go:360] acquireMachinesLock for auto-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:16.345803    4835 start.go:364] duration metric: took 402.375µs to acquireMachinesLock for "auto-693000"
	I0731 12:11:16.345930    4835 start.go:93] Provisioning new machine with config: &{Name:auto-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:16.346213    4835 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:16.351888    4835 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:16.400656    4835 start.go:159] libmachine.API.Create for "auto-693000" (driver="qemu2")
	I0731 12:11:16.400708    4835 client.go:168] LocalClient.Create starting
	I0731 12:11:16.400827    4835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:16.400904    4835 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:16.400922    4835 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:16.400984    4835 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:16.401029    4835 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:16.401041    4835 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:16.401540    4835 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:16.562637    4835 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:16.685150    4835 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:16.685164    4835 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:16.685387    4835 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:16.694568    4835 main.go:141] libmachine: STDOUT: 
	I0731 12:11:16.694596    4835 main.go:141] libmachine: STDERR: 
	I0731 12:11:16.694649    4835 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2 +20000M
	I0731 12:11:16.702396    4835 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:16.702431    4835 main.go:141] libmachine: STDERR: 
	I0731 12:11:16.702450    4835 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:16.702459    4835 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:16.702468    4835 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:16.702509    4835 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:c9:76:9d:84:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/auto-693000/disk.qcow2
	I0731 12:11:16.704191    4835 main.go:141] libmachine: STDOUT: 
	I0731 12:11:16.704204    4835 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:16.704219    4835 client.go:171] duration metric: took 303.509875ms to LocalClient.Create
	I0731 12:11:18.705566    4835 start.go:128] duration metric: took 2.359364417s to createHost
	I0731 12:11:18.705616    4835 start.go:83] releasing machines lock for "auto-693000", held for 2.359831834s
	W0731 12:11:18.705762    4835 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:18.716063    4835 out.go:177] 
	W0731 12:11:18.720980    4835 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:11:18.720987    4835 out.go:239] * 
	* 
	W0731 12:11:18.721665    4835 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:11:18.737119    4835 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.92s)
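The --alsologtostderr trace above makes the launch mechanics visible: libmachine does not start qemu-system-aarch64 directly but wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and execs its argument vector with the connection inherited as file descriptor 3 (hence -netdev socket,id=net0,fd=3 on the qemu command line). A stripped-down sketch that isolates just that hand-off by substituting a trivial child process for qemu; the client and socket paths are verbatim from the trace, the substitution is mine:

	# Connects to the daemon's socket and execs the child with the connection as
	# fd 3; with the daemon down this reproduces the same "Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  /bin/sh -c 'echo "socket inherited as fd 3:"; ls -l /dev/fd/3'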

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E0731 12:11:29.008670    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.763970333s)

                                                
                                                
-- stdout --
	* [kindnet-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-693000" primary control-plane node in "kindnet-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:11:20.828440    4946 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:11:20.828590    4946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:20.828596    4946 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:20.828599    4946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:20.828729    4946 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:11:20.829782    4946 out.go:298] Setting JSON to false
	I0731 12:11:20.846292    4946 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4249,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:11:20.846353    4946 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:11:20.852714    4946 out.go:177] * [kindnet-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:11:20.859687    4946 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:11:20.859755    4946 notify.go:220] Checking for updates...
	I0731 12:11:20.866657    4946 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:11:20.869678    4946 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:11:20.873687    4946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:11:20.876683    4946 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:11:20.879676    4946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:11:20.883059    4946 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:11:20.883121    4946 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:11:20.883179    4946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:11:20.886721    4946 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:11:20.893705    4946 start.go:297] selected driver: qemu2
	I0731 12:11:20.893710    4946 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:11:20.893717    4946 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:11:20.895947    4946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:11:20.900527    4946 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:11:20.903759    4946 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:11:20.903808    4946 cni.go:84] Creating CNI manager for "kindnet"
	I0731 12:11:20.903817    4946 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 12:11:20.903847    4946 start.go:340] cluster config:
	{Name:kindnet-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:11:20.907612    4946 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:20.915627    4946 out.go:177] * Starting "kindnet-693000" primary control-plane node in "kindnet-693000" cluster
	I0731 12:11:20.919674    4946 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:11:20.919688    4946 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:11:20.919697    4946 cache.go:56] Caching tarball of preloaded images
	I0731 12:11:20.919746    4946 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:20.919751    4946 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:11:20.919808    4946 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kindnet-693000/config.json ...
	I0731 12:11:20.919818    4946 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kindnet-693000/config.json: {Name:mk7a46a95da4dcb7d799ec46ec5d20d20bb7ebee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:20.920037    4946 start.go:360] acquireMachinesLock for kindnet-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:20.920069    4946 start.go:364] duration metric: took 26.167µs to acquireMachinesLock for "kindnet-693000"
	I0731 12:11:20.920079    4946 start.go:93] Provisioning new machine with config: &{Name:kindnet-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:20.920107    4946 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:20.928636    4946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:20.944847    4946 start.go:159] libmachine.API.Create for "kindnet-693000" (driver="qemu2")
	I0731 12:11:20.944874    4946 client.go:168] LocalClient.Create starting
	I0731 12:11:20.944939    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:20.944973    4946 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:20.944982    4946 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:20.945021    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:20.945046    4946 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:20.945059    4946 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:20.945405    4946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:21.103969    4946 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:21.161662    4946 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:21.161670    4946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:21.161894    4946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:21.171270    4946 main.go:141] libmachine: STDOUT: 
	I0731 12:11:21.171291    4946 main.go:141] libmachine: STDERR: 
	I0731 12:11:21.171336    4946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2 +20000M
	I0731 12:11:21.179371    4946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:21.179385    4946 main.go:141] libmachine: STDERR: 
	I0731 12:11:21.179407    4946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:21.179411    4946 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:21.179422    4946 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:21.179445    4946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:9b:c5:9b:bb:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:21.180986    4946 main.go:141] libmachine: STDOUT: 
	I0731 12:11:21.181000    4946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:21.181018    4946 client.go:171] duration metric: took 236.142417ms to LocalClient.Create
	I0731 12:11:23.183171    4946 start.go:128] duration metric: took 2.263079166s to createHost
	I0731 12:11:23.183234    4946 start.go:83] releasing machines lock for "kindnet-693000", held for 2.263194541s
	W0731 12:11:23.183323    4946 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:23.192872    4946 out.go:177] * Deleting "kindnet-693000" in qemu2 ...
	W0731 12:11:23.216156    4946 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:23.216192    4946 start.go:729] Will try again in 5 seconds ...
	I0731 12:11:28.218337    4946 start.go:360] acquireMachinesLock for kindnet-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:28.218898    4946 start.go:364] duration metric: took 434.917µs to acquireMachinesLock for "kindnet-693000"
	I0731 12:11:28.219017    4946 start.go:93] Provisioning new machine with config: &{Name:kindnet-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:28.219354    4946 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:28.227964    4946 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:28.272060    4946 start.go:159] libmachine.API.Create for "kindnet-693000" (driver="qemu2")
	I0731 12:11:28.272116    4946 client.go:168] LocalClient.Create starting
	I0731 12:11:28.272239    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:28.272302    4946 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:28.272316    4946 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:28.272373    4946 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:28.272418    4946 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:28.272428    4946 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:28.273097    4946 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:28.438213    4946 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:28.498470    4946 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:28.498480    4946 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:28.498706    4946 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:28.508046    4946 main.go:141] libmachine: STDOUT: 
	I0731 12:11:28.508067    4946 main.go:141] libmachine: STDERR: 
	I0731 12:11:28.508123    4946 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2 +20000M
	I0731 12:11:28.516170    4946 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:28.516185    4946 main.go:141] libmachine: STDERR: 
	I0731 12:11:28.516196    4946 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:28.516215    4946 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:28.516227    4946 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:28.516264    4946 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:46:05:d5:16:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kindnet-693000/disk.qcow2
	I0731 12:11:28.517893    4946 main.go:141] libmachine: STDOUT: 
	I0731 12:11:28.517909    4946 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:28.517921    4946 client.go:171] duration metric: took 245.803708ms to LocalClient.Create
	I0731 12:11:30.520103    4946 start.go:128] duration metric: took 2.300744625s to createHost
	I0731 12:11:30.520182    4946 start.go:83] releasing machines lock for "kindnet-693000", held for 2.301266s
	W0731 12:11:30.520655    4946 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:30.534292    4946 out.go:177] 
	W0731 12:11:30.538599    4946 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:11:30.538622    4946 out.go:239] * 
	* 
	W0731 12:11:30.541024    4946 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:11:30.553254    4946 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.77s)
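The stderr trace above shows minikube's single-retry behavior around host creation: the first create fails at the socket_vmnet_client step, the profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ...", start.go:729), and the second attempt fails identically before the run exits with GUEST_PROVISION. A compressed sketch of that control flow follows; createHost here is an illustrative stand-in, not minikube's actual function:

	// retry_sketch.go - illustrative control flow only, not minikube code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the VM-creation step; while the
	// socket_vmnet daemon is down it always fails the same way.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				// the real run reports GUEST_PROVISION and exits with status 80
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}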

TestNetworkPlugins/group/calico/Start (9.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.885703875s)

-- stdout --
	* [calico-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-693000" primary control-plane node in "calico-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:11:32.799596    5059 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:11:32.799738    5059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:32.799743    5059 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:32.799746    5059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:32.799896    5059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:11:32.801006    5059 out.go:298] Setting JSON to false
	I0731 12:11:32.817228    5059 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4261,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:11:32.817286    5059 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:11:32.822739    5059 out.go:177] * [calico-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:11:32.829632    5059 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:11:32.829709    5059 notify.go:220] Checking for updates...
	I0731 12:11:32.836531    5059 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:11:32.839618    5059 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:11:32.843614    5059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:11:32.846565    5059 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:11:32.849614    5059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:11:32.853038    5059 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:11:32.853104    5059 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:11:32.853160    5059 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:11:32.857539    5059 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:11:32.864577    5059 start.go:297] selected driver: qemu2
	I0731 12:11:32.864582    5059 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:11:32.864587    5059 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:11:32.866872    5059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:11:32.871518    5059 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:11:32.874642    5059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:11:32.874673    5059 cni.go:84] Creating CNI manager for "calico"
	I0731 12:11:32.874678    5059 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0731 12:11:32.874711    5059 start.go:340] cluster config:
	{Name:calico-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:11:32.878936    5059 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:32.885588    5059 out.go:177] * Starting "calico-693000" primary control-plane node in "calico-693000" cluster
	I0731 12:11:32.889573    5059 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:11:32.889592    5059 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:11:32.889605    5059 cache.go:56] Caching tarball of preloaded images
	I0731 12:11:32.889685    5059 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:32.889690    5059 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:11:32.889760    5059 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/calico-693000/config.json ...
	I0731 12:11:32.889771    5059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/calico-693000/config.json: {Name:mk415548a774ce125b828997dd087cb0364bf63e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:32.889986    5059 start.go:360] acquireMachinesLock for calico-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:32.890019    5059 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "calico-693000"
	I0731 12:11:32.890029    5059 start.go:93] Provisioning new machine with config: &{Name:calico-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:32.890056    5059 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:32.897643    5059 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:32.914225    5059 start.go:159] libmachine.API.Create for "calico-693000" (driver="qemu2")
	I0731 12:11:32.914250    5059 client.go:168] LocalClient.Create starting
	I0731 12:11:32.914312    5059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:32.914343    5059 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:32.914351    5059 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:32.914384    5059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:32.914409    5059 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:32.914419    5059 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:32.914760    5059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:33.069564    5059 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:33.160991    5059 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:33.160997    5059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:33.161224    5059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:33.170928    5059 main.go:141] libmachine: STDOUT: 
	I0731 12:11:33.170945    5059 main.go:141] libmachine: STDERR: 
	I0731 12:11:33.171003    5059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2 +20000M
	I0731 12:11:33.179045    5059 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:33.179058    5059 main.go:141] libmachine: STDERR: 
	I0731 12:11:33.179074    5059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:33.179079    5059 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:33.179090    5059 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:33.179121    5059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:6d:07:4a:d4:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:33.180775    5059 main.go:141] libmachine: STDOUT: 
	I0731 12:11:33.180789    5059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:33.180816    5059 client.go:171] duration metric: took 266.558667ms to LocalClient.Create
	I0731 12:11:35.182986    5059 start.go:128] duration metric: took 2.292942042s to createHost
	I0731 12:11:35.183051    5059 start.go:83] releasing machines lock for "calico-693000", held for 2.293061166s
	W0731 12:11:35.183126    5059 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:35.192662    5059 out.go:177] * Deleting "calico-693000" in qemu2 ...
	W0731 12:11:35.220531    5059 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:35.220556    5059 start.go:729] Will try again in 5 seconds ...
	I0731 12:11:40.222698    5059 start.go:360] acquireMachinesLock for calico-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:40.223282    5059 start.go:364] duration metric: took 452.667µs to acquireMachinesLock for "calico-693000"
	I0731 12:11:40.223438    5059 start.go:93] Provisioning new machine with config: &{Name:calico-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:40.223757    5059 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:40.232455    5059 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:40.276998    5059 start.go:159] libmachine.API.Create for "calico-693000" (driver="qemu2")
	I0731 12:11:40.277065    5059 client.go:168] LocalClient.Create starting
	I0731 12:11:40.277194    5059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:40.277262    5059 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:40.277277    5059 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:40.277345    5059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:40.277389    5059 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:40.277408    5059 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:40.277950    5059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:40.439103    5059 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:40.594048    5059 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:40.594058    5059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:40.594318    5059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:40.604105    5059 main.go:141] libmachine: STDOUT: 
	I0731 12:11:40.604122    5059 main.go:141] libmachine: STDERR: 
	I0731 12:11:40.604172    5059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2 +20000M
	I0731 12:11:40.612046    5059 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:40.612060    5059 main.go:141] libmachine: STDERR: 
	I0731 12:11:40.612076    5059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:40.612081    5059 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:40.612092    5059 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:40.612120    5059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:54:47:61:b7:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/calico-693000/disk.qcow2
	I0731 12:11:40.613734    5059 main.go:141] libmachine: STDOUT: 
	I0731 12:11:40.613750    5059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:40.613761    5059 client.go:171] duration metric: took 336.697375ms to LocalClient.Create
	I0731 12:11:42.616045    5059 start.go:128] duration metric: took 2.392285291s to createHost
	I0731 12:11:42.616122    5059 start.go:83] releasing machines lock for "calico-693000", held for 2.392847s
	W0731 12:11:42.616464    5059 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:42.626073    5059 out.go:177] 
	W0731 12:11:42.629237    5059 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:11:42.629303    5059 out.go:239] * 
	* 
	W0731 12:11:42.631668    5059 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:11:42.642091    5059 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.89s)
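Note that disk preparation succeeds in every attempt: both qemu-img calls return cleanly ("Image resized.") and the run only fails afterwards, when socket_vmnet_client is launched. A sketch of those two qemu-img invocations as libmachine logs them, using shortened placeholder paths rather than the full per-profile machine directory:

	// diskimage_sketch.go - reconstruction of the logged qemu-img steps;
	// paths are placeholders, the commands and flags are copied from the trace.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run echoes the command the way libmachine logs it, then executes it.
	func run(name string, args ...string) error {
		fmt.Println("executing:", name, args)
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("OUTPUT: %s\n", out)
		return err
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // stand-ins for the machine paths
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
			panic(err)
		}
		if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil { // the 20000MB disk from the logs
			panic(err)
		}
	}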

TestNetworkPlugins/group/custom-flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
E0731 12:11:45.936720    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.823604917s)

-- stdout --
	* [custom-flannel-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-693000" primary control-plane node in "custom-flannel-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:11:45.062274    5180 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:11:45.062420    5180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:45.062423    5180 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:45.062426    5180 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:45.062544    5180 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:11:45.063760    5180 out.go:298] Setting JSON to false
	I0731 12:11:45.080126    5180 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4274,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:11:45.080209    5180 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:11:45.086920    5180 out.go:177] * [custom-flannel-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:11:45.093815    5180 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:11:45.093960    5180 notify.go:220] Checking for updates...
	I0731 12:11:45.101747    5180 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:11:45.104875    5180 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:11:45.108809    5180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:11:45.111868    5180 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:11:45.114802    5180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:11:45.118149    5180 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:11:45.118214    5180 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:11:45.118264    5180 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:11:45.122784    5180 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:11:45.129855    5180 start.go:297] selected driver: qemu2
	I0731 12:11:45.129862    5180 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:11:45.129871    5180 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:11:45.132227    5180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:11:45.135699    5180 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:11:45.138844    5180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:11:45.138868    5180 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0731 12:11:45.138881    5180 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0731 12:11:45.138922    5180 start.go:340] cluster config:
	{Name:custom-flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:11:45.142506    5180 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:45.149775    5180 out.go:177] * Starting "custom-flannel-693000" primary control-plane node in "custom-flannel-693000" cluster
	I0731 12:11:45.153861    5180 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:11:45.153878    5180 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:11:45.153891    5180 cache.go:56] Caching tarball of preloaded images
	I0731 12:11:45.153947    5180 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:45.153953    5180 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:11:45.154024    5180 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/custom-flannel-693000/config.json ...
	I0731 12:11:45.154034    5180 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/custom-flannel-693000/config.json: {Name:mk334387c1643344b6fe32822cb2676d7e27e7ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:45.154401    5180 start.go:360] acquireMachinesLock for custom-flannel-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:45.154444    5180 start.go:364] duration metric: took 31.625µs to acquireMachinesLock for "custom-flannel-693000"
	I0731 12:11:45.154456    5180 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:45.154483    5180 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:45.162781    5180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:45.178232    5180 start.go:159] libmachine.API.Create for "custom-flannel-693000" (driver="qemu2")
	I0731 12:11:45.178253    5180 client.go:168] LocalClient.Create starting
	I0731 12:11:45.178325    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:45.178365    5180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:45.178374    5180 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:45.178421    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:45.178450    5180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:45.178456    5180 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:45.178778    5180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:45.331723    5180 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:45.436154    5180 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:45.436166    5180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:45.436401    5180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:45.445630    5180 main.go:141] libmachine: STDOUT: 
	I0731 12:11:45.445652    5180 main.go:141] libmachine: STDERR: 
	I0731 12:11:45.445713    5180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2 +20000M
	I0731 12:11:45.453596    5180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:45.453612    5180 main.go:141] libmachine: STDERR: 
	I0731 12:11:45.453637    5180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:45.453641    5180 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:45.453652    5180 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:45.453676    5180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d9:e3:35:c2:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:45.455313    5180 main.go:141] libmachine: STDOUT: 
	I0731 12:11:45.455327    5180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:45.455347    5180 client.go:171] duration metric: took 277.095208ms to LocalClient.Create
	I0731 12:11:47.457464    5180 start.go:128] duration metric: took 2.303003709s to createHost
	I0731 12:11:47.457495    5180 start.go:83] releasing machines lock for "custom-flannel-693000", held for 2.303081334s
	W0731 12:11:47.457560    5180 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:47.463866    5180 out.go:177] * Deleting "custom-flannel-693000" in qemu2 ...
	W0731 12:11:47.487272    5180 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:47.487286    5180 start.go:729] Will try again in 5 seconds ...
	I0731 12:11:52.489428    5180 start.go:360] acquireMachinesLock for custom-flannel-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:52.490035    5180 start.go:364] duration metric: took 512.25µs to acquireMachinesLock for "custom-flannel-693000"
	I0731 12:11:52.490119    5180 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:52.490496    5180 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:52.496343    5180 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:52.544998    5180 start.go:159] libmachine.API.Create for "custom-flannel-693000" (driver="qemu2")
	I0731 12:11:52.545061    5180 client.go:168] LocalClient.Create starting
	I0731 12:11:52.545181    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:52.545246    5180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:52.545266    5180 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:52.545328    5180 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:52.545374    5180 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:52.545387    5180 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:52.545957    5180 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:52.709043    5180 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:52.800005    5180 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:52.800011    5180 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:52.800234    5180 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:52.809614    5180 main.go:141] libmachine: STDOUT: 
	I0731 12:11:52.809641    5180 main.go:141] libmachine: STDERR: 
	I0731 12:11:52.809692    5180 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2 +20000M
	I0731 12:11:52.817663    5180 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:52.817676    5180 main.go:141] libmachine: STDERR: 
	I0731 12:11:52.817686    5180 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:52.817692    5180 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:52.817701    5180 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:52.817737    5180 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f6:f9:d9:61:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/custom-flannel-693000/disk.qcow2
	I0731 12:11:52.819501    5180 main.go:141] libmachine: STDOUT: 
	I0731 12:11:52.819512    5180 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:52.819526    5180 client.go:171] duration metric: took 274.465166ms to LocalClient.Create
	I0731 12:11:54.821648    5180 start.go:128] duration metric: took 2.331168375s to createHost
	I0731 12:11:54.821679    5180 start.go:83] releasing machines lock for "custom-flannel-693000", held for 2.33165175s
	W0731 12:11:54.821805    5180 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:54.833055    5180 out.go:177] 
	W0731 12:11:54.837227    5180 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:11:54.837241    5180 out.go:239] * 
	* 
	W0731 12:11:54.837917    5180 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:11:54.850170    5180 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.82s)
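Every start in this group fails the same way: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, QEMU is therefore never launched, and after one retry minikube exits with status 80. A diagnostic sketch for the build host follows; it is not taken from this log, the nc probe and the Homebrew service name assume a stock socket_vmnet install, and the probe may need sudo depending on socket permissions:

	# Is anything listening on the daemon's Unix socket?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet </dev/null && echo "daemon is listening" || echo "connection refused"
	# If the probe fails, (re)start the daemon on a Homebrew-managed host:
	sudo brew services restart socket_vmnet

Until that probe succeeds, every qemu2 start that selects the socket_vmnet network will fail exactly as above.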

TestNetworkPlugins/group/false/Start (9.75s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.748027125s)

-- stdout --
	* [false-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-693000" primary control-plane node in "false-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:11:57.244310    5297 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:11:57.244436    5297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:57.244439    5297 out.go:304] Setting ErrFile to fd 2...
	I0731 12:11:57.244441    5297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:11:57.244578    5297 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:11:57.245631    5297 out.go:298] Setting JSON to false
	I0731 12:11:57.262568    5297 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4286,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:11:57.262648    5297 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:11:57.269343    5297 out.go:177] * [false-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:11:57.277287    5297 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:11:57.277313    5297 notify.go:220] Checking for updates...
	I0731 12:11:57.284250    5297 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:11:57.287314    5297 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:11:57.291308    5297 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:11:57.294290    5297 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:11:57.297249    5297 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:11:57.300660    5297 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:11:57.300732    5297 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:11:57.300781    5297 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:11:57.304209    5297 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:11:57.311344    5297 start.go:297] selected driver: qemu2
	I0731 12:11:57.311350    5297 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:11:57.311366    5297 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:11:57.313612    5297 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:11:57.318205    5297 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:11:57.321360    5297 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:11:57.321388    5297 cni.go:84] Creating CNI manager for "false"
	I0731 12:11:57.321417    5297 start.go:340] cluster config:
	{Name:false-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:11:57.325073    5297 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:11:57.333296    5297 out.go:177] * Starting "false-693000" primary control-plane node in "false-693000" cluster
	I0731 12:11:57.337228    5297 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:11:57.337241    5297 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:11:57.337252    5297 cache.go:56] Caching tarball of preloaded images
	I0731 12:11:57.337301    5297 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:11:57.337306    5297 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:11:57.337361    5297 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/false-693000/config.json ...
	I0731 12:11:57.337371    5297 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/false-693000/config.json: {Name:mk6ba7aad21268b150eee1ce0d87929ad78c1c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:11:57.337577    5297 start.go:360] acquireMachinesLock for false-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:11:57.337610    5297 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "false-693000"
	I0731 12:11:57.337620    5297 start.go:93] Provisioning new machine with config: &{Name:false-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:11:57.337646    5297 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:11:57.346376    5297 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:11:57.363270    5297 start.go:159] libmachine.API.Create for "false-693000" (driver="qemu2")
	I0731 12:11:57.363302    5297 client.go:168] LocalClient.Create starting
	I0731 12:11:57.363366    5297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:11:57.363404    5297 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:57.363413    5297 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:57.363461    5297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:11:57.363486    5297 main.go:141] libmachine: Decoding PEM data...
	I0731 12:11:57.363497    5297 main.go:141] libmachine: Parsing certificate...
	I0731 12:11:57.363881    5297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:11:57.517101    5297 main.go:141] libmachine: Creating SSH key...
	I0731 12:11:57.556992    5297 main.go:141] libmachine: Creating Disk image...
	I0731 12:11:57.556998    5297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:11:57.557213    5297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:11:57.566313    5297 main.go:141] libmachine: STDOUT: 
	I0731 12:11:57.566332    5297 main.go:141] libmachine: STDERR: 
	I0731 12:11:57.566380    5297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2 +20000M
	I0731 12:11:57.574366    5297 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:11:57.574380    5297 main.go:141] libmachine: STDERR: 
	I0731 12:11:57.574398    5297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:11:57.574403    5297 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:11:57.574413    5297 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:11:57.574437    5297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:60:b0:f0:05:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:11:57.576138    5297 main.go:141] libmachine: STDOUT: 
	I0731 12:11:57.576152    5297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:11:57.576171    5297 client.go:171] duration metric: took 212.869ms to LocalClient.Create
	I0731 12:11:59.578435    5297 start.go:128] duration metric: took 2.24079525s to createHost
	I0731 12:11:59.578541    5297 start.go:83] releasing machines lock for "false-693000", held for 2.240956625s
	W0731 12:11:59.578654    5297 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:59.594032    5297 out.go:177] * Deleting "false-693000" in qemu2 ...
	W0731 12:11:59.619349    5297 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:11:59.619382    5297 start.go:729] Will try again in 5 seconds ...
	I0731 12:12:04.621081    5297 start.go:360] acquireMachinesLock for false-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:04.621720    5297 start.go:364] duration metric: took 465.291µs to acquireMachinesLock for "false-693000"
	I0731 12:12:04.621830    5297 start.go:93] Provisioning new machine with config: &{Name:false-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:04.622039    5297 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:04.626681    5297 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:04.669774    5297 start.go:159] libmachine.API.Create for "false-693000" (driver="qemu2")
	I0731 12:12:04.669825    5297 client.go:168] LocalClient.Create starting
	I0731 12:12:04.669968    5297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:04.670033    5297 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:04.670049    5297 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:04.670107    5297 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:04.670151    5297 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:04.670167    5297 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:04.670686    5297 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:04.829621    5297 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:04.904905    5297 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:04.904912    5297 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:04.905137    5297 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:12:04.914232    5297 main.go:141] libmachine: STDOUT: 
	I0731 12:12:04.914250    5297 main.go:141] libmachine: STDERR: 
	I0731 12:12:04.914317    5297 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2 +20000M
	I0731 12:12:04.922794    5297 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:04.922811    5297 main.go:141] libmachine: STDERR: 
	I0731 12:12:04.922825    5297 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:12:04.922830    5297 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:04.922841    5297 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:04.922871    5297 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:74:16:69:b8:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/false-693000/disk.qcow2
	I0731 12:12:04.924665    5297 main.go:141] libmachine: STDOUT: 
	I0731 12:12:04.924681    5297 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:04.924693    5297 client.go:171] duration metric: took 254.867375ms to LocalClient.Create
	I0731 12:12:06.926734    5297 start.go:128] duration metric: took 2.304681834s to createHost
	I0731 12:12:06.926774    5297 start.go:83] releasing machines lock for "false-693000", held for 2.305069875s
	W0731 12:12:06.926885    5297 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:06.938071    5297 out.go:177] 
	W0731 12:12:06.942081    5297 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:12:06.942087    5297 out.go:239] * 
	* 
	W0731 12:12:06.942643    5297 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:06.955035    5297 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.75s)
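Both provisioning attempts above complete qemu-img convert and qemu-img resize successfully, so disk creation is healthy; the failure occurs when socket_vmnet_client connects to /var/run/socket_vmnet before exec'ing qemu-system-aarch64 (the connected socket is what fd=3 in the logged -netdev socket,id=net0,fd=3 refers to). Because the client connects first, the error reproduces without booting anything; a hypothetical minimal probe, with "echo ok" standing in for the QEMU command line:

	# socket_vmnet_client SOCKET COMMAND...: connect to SOCKET, then exec COMMAND
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# With the daemon down, this prints the same error captured in the log:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused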

TestNetworkPlugins/group/enable-default-cni/Start (9.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.940307875s)

-- stdout --
	* [enable-default-cni-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-693000" primary control-plane node in "enable-default-cni-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:12:09.103943    5408 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:12:09.104086    5408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:09.104089    5408 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:09.104092    5408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:09.104236    5408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:12:09.105251    5408 out.go:298] Setting JSON to false
	I0731 12:12:09.121552    5408 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4298,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:12:09.121624    5408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:12:09.128884    5408 out.go:177] * [enable-default-cni-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:12:09.137672    5408 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:12:09.137718    5408 notify.go:220] Checking for updates...
	I0731 12:12:09.144603    5408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:12:09.147683    5408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:12:09.150635    5408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:12:09.153616    5408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:12:09.156607    5408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:12:09.159990    5408 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:12:09.160066    5408 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:12:09.160117    5408 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:12:09.164577    5408 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:12:09.171633    5408 start.go:297] selected driver: qemu2
	I0731 12:12:09.171639    5408 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:12:09.171644    5408 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:12:09.173733    5408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:12:09.176682    5408 out.go:177] * Automatically selected the socket_vmnet network
	E0731 12:12:09.179722    5408 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0731 12:12:09.179738    5408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:12:09.179776    5408 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:12:09.179780    5408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:12:09.179813    5408 start.go:340] cluster config:
	{Name:enable-default-cni-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:12:09.183547    5408 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:12:09.191670    5408 out.go:177] * Starting "enable-default-cni-693000" primary control-plane node in "enable-default-cni-693000" cluster
	I0731 12:12:09.195688    5408 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:12:09.195704    5408 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:12:09.195716    5408 cache.go:56] Caching tarball of preloaded images
	I0731 12:12:09.195783    5408 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:12:09.195788    5408 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:12:09.195853    5408 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/enable-default-cni-693000/config.json ...
	I0731 12:12:09.195864    5408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/enable-default-cni-693000/config.json: {Name:mkd99c7c94d23bc30b794b8573c8b7385a6cc943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:12:09.196068    5408 start.go:360] acquireMachinesLock for enable-default-cni-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:09.196102    5408 start.go:364] duration metric: took 25.792µs to acquireMachinesLock for "enable-default-cni-693000"
	I0731 12:12:09.196112    5408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:09.196138    5408 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:09.203590    5408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:09.220552    5408 start.go:159] libmachine.API.Create for "enable-default-cni-693000" (driver="qemu2")
	I0731 12:12:09.220581    5408 client.go:168] LocalClient.Create starting
	I0731 12:12:09.220646    5408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:09.220676    5408 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:09.220685    5408 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:09.220725    5408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:09.220751    5408 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:09.220757    5408 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:09.221129    5408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:09.374884    5408 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:09.414914    5408 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:09.414920    5408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:09.415162    5408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:09.424828    5408 main.go:141] libmachine: STDOUT: 
	I0731 12:12:09.424854    5408 main.go:141] libmachine: STDERR: 
	I0731 12:12:09.424930    5408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2 +20000M
	I0731 12:12:09.433114    5408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:09.433130    5408 main.go:141] libmachine: STDERR: 
	I0731 12:12:09.433156    5408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:09.433167    5408 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:09.433183    5408 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:09.433208    5408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:00:f3:13:9a:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:09.434835    5408 main.go:141] libmachine: STDOUT: 
	I0731 12:12:09.434852    5408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:09.434871    5408 client.go:171] duration metric: took 214.2885ms to LocalClient.Create
	I0731 12:12:11.436951    5408 start.go:128] duration metric: took 2.240836083s to createHost
	I0731 12:12:11.437018    5408 start.go:83] releasing machines lock for "enable-default-cni-693000", held for 2.240945917s
	W0731 12:12:11.437061    5408 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:11.453367    5408 out.go:177] * Deleting "enable-default-cni-693000" in qemu2 ...
	W0731 12:12:11.473554    5408 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:11.473566    5408 start.go:729] Will try again in 5 seconds ...
	I0731 12:12:16.475746    5408 start.go:360] acquireMachinesLock for enable-default-cni-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:16.476347    5408 start.go:364] duration metric: took 427.208µs to acquireMachinesLock for "enable-default-cni-693000"
	I0731 12:12:16.476527    5408 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:16.476800    5408 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:16.484435    5408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:16.534899    5408 start.go:159] libmachine.API.Create for "enable-default-cni-693000" (driver="qemu2")
	I0731 12:12:16.534952    5408 client.go:168] LocalClient.Create starting
	I0731 12:12:16.535069    5408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:16.535134    5408 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:16.535148    5408 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:16.535220    5408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:16.535264    5408 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:16.535275    5408 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:16.535835    5408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:16.705999    5408 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:16.952659    5408 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:16.952673    5408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:16.952921    5408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:16.962630    5408 main.go:141] libmachine: STDOUT: 
	I0731 12:12:16.962653    5408 main.go:141] libmachine: STDERR: 
	I0731 12:12:16.962713    5408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2 +20000M
	I0731 12:12:16.970725    5408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:16.970743    5408 main.go:141] libmachine: STDERR: 
	I0731 12:12:16.970754    5408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:16.970762    5408 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:16.970772    5408 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:16.970796    5408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:9b:6a:e9:f2:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/enable-default-cni-693000/disk.qcow2
	I0731 12:12:16.972462    5408 main.go:141] libmachine: STDOUT: 
	I0731 12:12:16.972477    5408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:16.972490    5408 client.go:171] duration metric: took 437.540334ms to LocalClient.Create
	I0731 12:12:18.974634    5408 start.go:128] duration metric: took 2.497845333s to createHost
	I0731 12:12:18.974736    5408 start.go:83] releasing machines lock for "enable-default-cni-693000", held for 2.498405625s
	W0731 12:12:18.975036    5408 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:18.990626    5408 out.go:177] 
	W0731 12:12:18.995694    5408 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:12:18.995730    5408 out.go:239] * 
	* 
	W0731 12:12:18.996871    5408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:19.007571    5408 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.94s)
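Note: this failure (and every other Start failure in this group) reduces to the single error visible in the stderr above: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of the /var/run/socket_vmnet unix socket is refused, i.e. no socket_vmnet daemon was listening on the build agent. The following is a minimal, self-contained Go sketch (not minikube code; the socket path is copied from the SocketVMnetPath field in the logged config) that reproduces just that connectivity check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// SocketVMnetPath from the logged cluster config; adjust if your install differs.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this agent the dial fails exactly as logged:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails with "connection refused", the usual remedy is to (re)start the socket_vmnet daemon on the host before rerunning the suite; as the logs show, the tests never get past VM creation until it is up.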

TestNetworkPlugins/group/flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.846506458s)

-- stdout --
	* [flannel-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-693000" primary control-plane node in "flannel-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:12:21.166223    5517 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:12:21.166393    5517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:21.166397    5517 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:21.166399    5517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:21.166542    5517 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:12:21.167667    5517 out.go:298] Setting JSON to false
	I0731 12:12:21.183962    5517 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4310,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:12:21.184076    5517 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:12:21.191444    5517 out.go:177] * [flannel-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:12:21.198371    5517 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:12:21.198435    5517 notify.go:220] Checking for updates...
	I0731 12:12:21.207400    5517 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:12:21.210485    5517 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:12:21.213460    5517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:12:21.216462    5517 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:12:21.219432    5517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:12:21.222783    5517 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:12:21.222865    5517 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:12:21.222915    5517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:12:21.226447    5517 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:12:21.233406    5517 start.go:297] selected driver: qemu2
	I0731 12:12:21.233413    5517 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:12:21.233421    5517 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:12:21.235631    5517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:12:21.239367    5517 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:12:21.242444    5517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:12:21.242461    5517 cni.go:84] Creating CNI manager for "flannel"
	I0731 12:12:21.242466    5517 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0731 12:12:21.242504    5517 start.go:340] cluster config:
	{Name:flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:12:21.245770    5517 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:12:21.253290    5517 out.go:177] * Starting "flannel-693000" primary control-plane node in "flannel-693000" cluster
	I0731 12:12:21.257407    5517 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:12:21.257420    5517 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:12:21.257429    5517 cache.go:56] Caching tarball of preloaded images
	I0731 12:12:21.257486    5517 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:12:21.257491    5517 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:12:21.257555    5517 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/flannel-693000/config.json ...
	I0731 12:12:21.257571    5517 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/flannel-693000/config.json: {Name:mk73d701584e0dd6ef26990ee6851927f4b7fc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:12:21.257776    5517 start.go:360] acquireMachinesLock for flannel-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:21.257812    5517 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "flannel-693000"
	I0731 12:12:21.257821    5517 start.go:93] Provisioning new machine with config: &{Name:flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:21.257851    5517 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:21.265430    5517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:21.280595    5517 start.go:159] libmachine.API.Create for "flannel-693000" (driver="qemu2")
	I0731 12:12:21.280705    5517 client.go:168] LocalClient.Create starting
	I0731 12:12:21.280771    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:21.280800    5517 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:21.280817    5517 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:21.280854    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:21.280878    5517 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:21.280890    5517 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:21.281215    5517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:21.434322    5517 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:21.493963    5517 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:21.493968    5517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:21.494183    5517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:21.503466    5517 main.go:141] libmachine: STDOUT: 
	I0731 12:12:21.503482    5517 main.go:141] libmachine: STDERR: 
	I0731 12:12:21.503536    5517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2 +20000M
	I0731 12:12:21.511603    5517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:21.511617    5517 main.go:141] libmachine: STDERR: 
	I0731 12:12:21.511635    5517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:21.511639    5517 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:21.511651    5517 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:21.511688    5517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f3:eb:d8:8a:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:21.513354    5517 main.go:141] libmachine: STDOUT: 
	I0731 12:12:21.513367    5517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:21.513383    5517 client.go:171] duration metric: took 232.67625ms to LocalClient.Create
	I0731 12:12:23.515723    5517 start.go:128] duration metric: took 2.257844541s to createHost
	I0731 12:12:23.515832    5517 start.go:83] releasing machines lock for "flannel-693000", held for 2.258048083s
	W0731 12:12:23.515904    5517 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:23.528778    5517 out.go:177] * Deleting "flannel-693000" in qemu2 ...
	W0731 12:12:23.555736    5517 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:23.555768    5517 start.go:729] Will try again in 5 seconds ...
	I0731 12:12:28.557945    5517 start.go:360] acquireMachinesLock for flannel-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:28.558484    5517 start.go:364] duration metric: took 414.334µs to acquireMachinesLock for "flannel-693000"
	I0731 12:12:28.558630    5517 start.go:93] Provisioning new machine with config: &{Name:flannel-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:28.558964    5517 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:28.568703    5517 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:28.614989    5517 start.go:159] libmachine.API.Create for "flannel-693000" (driver="qemu2")
	I0731 12:12:28.615169    5517 client.go:168] LocalClient.Create starting
	I0731 12:12:28.615301    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:28.615366    5517 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:28.615384    5517 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:28.615450    5517 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:28.615496    5517 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:28.615508    5517 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:28.616015    5517 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:28.776472    5517 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:28.916327    5517 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:28.916336    5517 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:28.916592    5517 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:28.926218    5517 main.go:141] libmachine: STDOUT: 
	I0731 12:12:28.926235    5517 main.go:141] libmachine: STDERR: 
	I0731 12:12:28.926282    5517 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2 +20000M
	I0731 12:12:28.934277    5517 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:28.934301    5517 main.go:141] libmachine: STDERR: 
	I0731 12:12:28.934311    5517 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:28.934317    5517 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:28.934326    5517 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:28.934351    5517 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:99:64:db:a0:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/flannel-693000/disk.qcow2
	I0731 12:12:28.936033    5517 main.go:141] libmachine: STDOUT: 
	I0731 12:12:28.936049    5517 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:28.936061    5517 client.go:171] duration metric: took 320.891917ms to LocalClient.Create
	I0731 12:12:30.938256    5517 start.go:128] duration metric: took 2.379289625s to createHost
	I0731 12:12:30.938333    5517 start.go:83] releasing machines lock for "flannel-693000", held for 2.379863833s
	W0731 12:12:30.938738    5517 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:30.950912    5517 out.go:177] 
	W0731 12:12:30.955088    5517 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:12:30.955117    5517 out.go:239] * 
	* 
	W0731 12:12:30.959868    5517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:30.970057    5517 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.85s)
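Note: the stderr above also shows that disk preparation is not the problem: before each boot attempt libmachine runs two qemu-img steps (a raw-to-qcow2 convert, then a +20000M resize) and both succeed; only the subsequent socket_vmnet dial fails. A hedged Go sketch of that two-step disk sequence, with illustrative file names rather than minikube's real per-profile machine directory:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two logged qemu-img invocations: convert the raw
	// boot2docker seed image to qcow2, then grow the image by the given amount.
	func createDisk(raw, qcow2, grow string) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Hypothetical file names; the run under test uses
		// .minikube/machines/<profile>/disk.qcow2(.raw).
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			fmt.Println(err)
		}
	}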

TestNetworkPlugins/group/bridge/Start (9.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.797915333s)

-- stdout --
	* [bridge-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-693000" primary control-plane node in "bridge-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:12:33.359221    5634 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:12:33.359357    5634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:33.359360    5634 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:33.359362    5634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:33.359509    5634 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:12:33.360671    5634 out.go:298] Setting JSON to false
	I0731 12:12:33.377247    5634 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4322,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:12:33.377334    5634 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:12:33.382694    5634 out.go:177] * [bridge-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:12:33.389607    5634 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:12:33.389670    5634 notify.go:220] Checking for updates...
	I0731 12:12:33.396575    5634 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:12:33.399591    5634 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:12:33.403608    5634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:12:33.406595    5634 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:12:33.409614    5634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:12:33.412982    5634 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:12:33.413048    5634 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:12:33.413097    5634 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:12:33.416529    5634 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:12:33.423612    5634 start.go:297] selected driver: qemu2
	I0731 12:12:33.423620    5634 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:12:33.423626    5634 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:12:33.425884    5634 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:12:33.430592    5634 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:12:33.433680    5634 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:12:33.433721    5634 cni.go:84] Creating CNI manager for "bridge"
	I0731 12:12:33.433725    5634 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:12:33.433773    5634 start.go:340] cluster config:
	{Name:bridge-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:12:33.437400    5634 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:12:33.445514    5634 out.go:177] * Starting "bridge-693000" primary control-plane node in "bridge-693000" cluster
	I0731 12:12:33.449639    5634 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:12:33.449661    5634 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:12:33.449672    5634 cache.go:56] Caching tarball of preloaded images
	I0731 12:12:33.449727    5634 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:12:33.449732    5634 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:12:33.449783    5634 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/bridge-693000/config.json ...
	I0731 12:12:33.449793    5634 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/bridge-693000/config.json: {Name:mk44bf02e9b3e205e64eb43dfdac1b2610122136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:12:33.450000    5634 start.go:360] acquireMachinesLock for bridge-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:33.450031    5634 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "bridge-693000"
	I0731 12:12:33.450040    5634 start.go:93] Provisioning new machine with config: &{Name:bridge-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:33.450074    5634 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:33.461567    5634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:33.478251    5634 start.go:159] libmachine.API.Create for "bridge-693000" (driver="qemu2")
	I0731 12:12:33.478280    5634 client.go:168] LocalClient.Create starting
	I0731 12:12:33.478335    5634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:33.478368    5634 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:33.478378    5634 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:33.478415    5634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:33.478444    5634 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:33.478453    5634 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:33.478796    5634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:33.632087    5634 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:33.754267    5634 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:33.754273    5634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:33.754508    5634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:33.764303    5634 main.go:141] libmachine: STDOUT: 
	I0731 12:12:33.764325    5634 main.go:141] libmachine: STDERR: 
	I0731 12:12:33.764387    5634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2 +20000M
	I0731 12:12:33.772725    5634 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:33.772746    5634 main.go:141] libmachine: STDERR: 
	I0731 12:12:33.772765    5634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:33.772771    5634 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:33.772782    5634 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:33.772810    5634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5c:38:d1:45:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:33.774519    5634 main.go:141] libmachine: STDOUT: 
	I0731 12:12:33.774536    5634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:33.774554    5634 client.go:171] duration metric: took 296.273875ms to LocalClient.Create
	I0731 12:12:35.776632    5634 start.go:128] duration metric: took 2.326581458s to createHost
	I0731 12:12:35.776712    5634 start.go:83] releasing machines lock for "bridge-693000", held for 2.326711s
	W0731 12:12:35.776759    5634 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:35.786923    5634 out.go:177] * Deleting "bridge-693000" in qemu2 ...
	W0731 12:12:35.808094    5634 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:35.808108    5634 start.go:729] Will try again in 5 seconds ...
	I0731 12:12:40.810221    5634 start.go:360] acquireMachinesLock for bridge-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:40.810719    5634 start.go:364] duration metric: took 381.166µs to acquireMachinesLock for "bridge-693000"
	I0731 12:12:40.811310    5634 start.go:93] Provisioning new machine with config: &{Name:bridge-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:40.811479    5634 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:40.818893    5634 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:40.856974    5634 start.go:159] libmachine.API.Create for "bridge-693000" (driver="qemu2")
	I0731 12:12:40.857026    5634 client.go:168] LocalClient.Create starting
	I0731 12:12:40.857146    5634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:40.857214    5634 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:40.857231    5634 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:40.857310    5634 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:40.857353    5634 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:40.857366    5634 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:40.857955    5634 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:41.017047    5634 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:41.075150    5634 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:41.075155    5634 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:41.075381    5634 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:41.084637    5634 main.go:141] libmachine: STDOUT: 
	I0731 12:12:41.084725    5634 main.go:141] libmachine: STDERR: 
	I0731 12:12:41.084775    5634 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2 +20000M
	I0731 12:12:41.092830    5634 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:41.092885    5634 main.go:141] libmachine: STDERR: 
	I0731 12:12:41.092906    5634 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:41.092910    5634 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:41.092919    5634 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:41.092944    5634 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:14:54:3e:ee:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/bridge-693000/disk.qcow2
	I0731 12:12:41.094606    5634 main.go:141] libmachine: STDOUT: 
	I0731 12:12:41.094660    5634 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:41.094677    5634 client.go:171] duration metric: took 237.650708ms to LocalClient.Create
	I0731 12:12:43.096739    5634 start.go:128] duration metric: took 2.285286041s to createHost
	I0731 12:12:43.096752    5634 start.go:83] releasing machines lock for "bridge-693000", held for 2.28604425s
	W0731 12:12:43.096834    5634 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:43.101096    5634 out.go:177] 
	W0731 12:12:43.106027    5634 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:12:43.106042    5634 out.go:239] * 
	* 
	W0731 12:12:43.106488    5634 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:43.117130    5634 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.80s)

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-693000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.862899166s)

-- stdout --
	* [kubenet-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-693000" primary control-plane node in "kubenet-693000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:12:45.268289    5743 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:12:45.268429    5743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:45.268433    5743 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:45.268435    5743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:45.268558    5743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:12:45.269807    5743 out.go:298] Setting JSON to false
	I0731 12:12:45.287254    5743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4334,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:12:45.287342    5743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:12:45.292140    5743 out.go:177] * [kubenet-693000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:12:45.298915    5743 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:12:45.298986    5743 notify.go:220] Checking for updates...
	I0731 12:12:45.305844    5743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:12:45.308932    5743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:12:45.312983    5743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:12:45.314381    5743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:12:45.316979    5743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:12:45.320379    5743 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:12:45.320447    5743 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:12:45.320483    5743 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:12:45.322091    5743 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:12:45.329034    5743 start.go:297] selected driver: qemu2
	I0731 12:12:45.329041    5743 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:12:45.329048    5743 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:12:45.331413    5743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:12:45.335806    5743 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:12:45.339090    5743 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:12:45.339114    5743 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0731 12:12:45.339146    5743 start.go:340] cluster config:
	{Name:kubenet-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:12:45.342838    5743 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:12:45.351019    5743 out.go:177] * Starting "kubenet-693000" primary control-plane node in "kubenet-693000" cluster
	I0731 12:12:45.354953    5743 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:12:45.354976    5743 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:12:45.354991    5743 cache.go:56] Caching tarball of preloaded images
	I0731 12:12:45.355054    5743 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:12:45.355060    5743 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:12:45.355123    5743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kubenet-693000/config.json ...
	I0731 12:12:45.355135    5743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/kubenet-693000/config.json: {Name:mk1c6c41b2bc56d9a5cbc025a0bc69d2b65287fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:12:45.355355    5743 start.go:360] acquireMachinesLock for kubenet-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:45.355389    5743 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "kubenet-693000"
	I0731 12:12:45.355404    5743 start.go:93] Provisioning new machine with config: &{Name:kubenet-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:45.355429    5743 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:45.364016    5743 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:45.381095    5743 start.go:159] libmachine.API.Create for "kubenet-693000" (driver="qemu2")
	I0731 12:12:45.381129    5743 client.go:168] LocalClient.Create starting
	I0731 12:12:45.381198    5743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:45.381228    5743 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:45.381238    5743 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:45.381284    5743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:45.381311    5743 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:45.381321    5743 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:45.381666    5743 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:45.534069    5743 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:45.660621    5743 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:45.660627    5743 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:45.660857    5743 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:45.670467    5743 main.go:141] libmachine: STDOUT: 
	I0731 12:12:45.670485    5743 main.go:141] libmachine: STDERR: 
	I0731 12:12:45.670532    5743 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2 +20000M
	I0731 12:12:45.678476    5743 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:45.678489    5743 main.go:141] libmachine: STDERR: 
	I0731 12:12:45.678507    5743 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:45.678513    5743 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:45.678530    5743 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:45.678556    5743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:6e:ae:d0:30:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:45.680223    5743 main.go:141] libmachine: STDOUT: 
	I0731 12:12:45.680238    5743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:45.680257    5743 client.go:171] duration metric: took 299.127166ms to LocalClient.Create
	I0731 12:12:47.682382    5743 start.go:128] duration metric: took 2.326976875s to createHost
	I0731 12:12:47.682412    5743 start.go:83] releasing machines lock for "kubenet-693000", held for 2.327055209s
	W0731 12:12:47.682457    5743 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:47.691471    5743 out.go:177] * Deleting "kubenet-693000" in qemu2 ...
	W0731 12:12:47.712901    5743 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:47.712911    5743 start.go:729] Will try again in 5 seconds ...
	I0731 12:12:52.715084    5743 start.go:360] acquireMachinesLock for kubenet-693000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:52.715739    5743 start.go:364] duration metric: took 491.5µs to acquireMachinesLock for "kubenet-693000"
	I0731 12:12:52.715924    5743 start.go:93] Provisioning new machine with config: &{Name:kubenet-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-693000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:52.716198    5743 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:52.725822    5743 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0731 12:12:52.770437    5743 start.go:159] libmachine.API.Create for "kubenet-693000" (driver="qemu2")
	I0731 12:12:52.770490    5743 client.go:168] LocalClient.Create starting
	I0731 12:12:52.770613    5743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:52.770673    5743 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:52.770692    5743 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:52.770751    5743 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:52.770802    5743 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:52.770839    5743 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:52.771370    5743 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:52.932109    5743 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:53.039066    5743 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:53.039073    5743 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:53.039300    5743 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:53.048690    5743 main.go:141] libmachine: STDOUT: 
	I0731 12:12:53.048704    5743 main.go:141] libmachine: STDERR: 
	I0731 12:12:53.048751    5743 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2 +20000M
	I0731 12:12:53.056563    5743 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:53.056578    5743 main.go:141] libmachine: STDERR: 
	I0731 12:12:53.056591    5743 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:53.056595    5743 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:53.056608    5743 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:53.056652    5743 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:1a:ed:36:fb:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/kubenet-693000/disk.qcow2
	I0731 12:12:53.058458    5743 main.go:141] libmachine: STDOUT: 
	I0731 12:12:53.058472    5743 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:53.058492    5743 client.go:171] duration metric: took 287.99425ms to LocalClient.Create
	I0731 12:12:55.060665    5743 start.go:128] duration metric: took 2.344470625s to createHost
	I0731 12:12:55.060748    5743 start.go:83] releasing machines lock for "kubenet-693000", held for 2.344991667s
	W0731 12:12:55.061213    5743 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:55.075867    5743 out.go:177] 
	W0731 12:12:55.078862    5743 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:12:55.078880    5743 out.go:239] * 
	* 
	W0731 12:12:55.080842    5743 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:12:55.091799    5743 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)

TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.969949667s)

-- stdout --
	* [old-k8s-version-195000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-195000" primary control-plane node in "old-k8s-version-195000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-195000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:12:57.276792    5854 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:12:57.276962    5854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:57.276966    5854 out.go:304] Setting ErrFile to fd 2...
	I0731 12:12:57.276968    5854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:12:57.277094    5854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:12:57.278115    5854 out.go:298] Setting JSON to false
	I0731 12:12:57.294951    5854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4346,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:12:57.295038    5854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:12:57.301108    5854 out.go:177] * [old-k8s-version-195000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:12:57.308055    5854 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:12:57.308096    5854 notify.go:220] Checking for updates...
	I0731 12:12:57.315120    5854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:12:57.318079    5854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:12:57.321147    5854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:12:57.324161    5854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:12:57.327160    5854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:12:57.330504    5854 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:12:57.330575    5854 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:12:57.330618    5854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:12:57.335208    5854 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:12:57.342074    5854 start.go:297] selected driver: qemu2
	I0731 12:12:57.342079    5854 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:12:57.342084    5854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:12:57.344429    5854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:12:57.348143    5854 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:12:57.351108    5854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:12:57.351140    5854 cni.go:84] Creating CNI manager for ""
	I0731 12:12:57.351147    5854 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:12:57.351172    5854 start.go:340] cluster config:
	{Name:old-k8s-version-195000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:12:57.354810    5854 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:12:57.363169    5854 out.go:177] * Starting "old-k8s-version-195000" primary control-plane node in "old-k8s-version-195000" cluster
	I0731 12:12:57.367180    5854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:12:57.367195    5854 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:12:57.367206    5854 cache.go:56] Caching tarball of preloaded images
	I0731 12:12:57.367272    5854 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:12:57.367278    5854 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:12:57.367343    5854 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/old-k8s-version-195000/config.json ...
	I0731 12:12:57.367353    5854 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/old-k8s-version-195000/config.json: {Name:mkbf7f524655dd15c4728b0d2ddf10f584c03b9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:12:57.367561    5854 start.go:360] acquireMachinesLock for old-k8s-version-195000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:12:57.367594    5854 start.go:364] duration metric: took 25.166µs to acquireMachinesLock for "old-k8s-version-195000"
	I0731 12:12:57.367607    5854 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:12:57.367632    5854 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:12:57.375104    5854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:12:57.389824    5854 start.go:159] libmachine.API.Create for "old-k8s-version-195000" (driver="qemu2")
	I0731 12:12:57.389849    5854 client.go:168] LocalClient.Create starting
	I0731 12:12:57.389907    5854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:12:57.389938    5854 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:57.389946    5854 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:57.389982    5854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:12:57.390004    5854 main.go:141] libmachine: Decoding PEM data...
	I0731 12:12:57.390011    5854 main.go:141] libmachine: Parsing certificate...
	I0731 12:12:57.390389    5854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:12:57.545620    5854 main.go:141] libmachine: Creating SSH key...
	I0731 12:12:57.720417    5854 main.go:141] libmachine: Creating Disk image...
	I0731 12:12:57.720426    5854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:12:57.720665    5854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:12:57.730571    5854 main.go:141] libmachine: STDOUT: 
	I0731 12:12:57.730585    5854 main.go:141] libmachine: STDERR: 
	I0731 12:12:57.730631    5854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2 +20000M
	I0731 12:12:57.738746    5854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:12:57.738759    5854 main.go:141] libmachine: STDERR: 
	I0731 12:12:57.738778    5854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:12:57.738783    5854 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:12:57.738798    5854 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:12:57.738825    5854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:85:7f:a4:78:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:12:57.740504    5854 main.go:141] libmachine: STDOUT: 
	I0731 12:12:57.740518    5854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:12:57.740534    5854 client.go:171] duration metric: took 350.685834ms to LocalClient.Create
	I0731 12:12:59.742639    5854 start.go:128] duration metric: took 2.375030625s to createHost
	I0731 12:12:59.742693    5854 start.go:83] releasing machines lock for "old-k8s-version-195000", held for 2.375129625s
	W0731 12:12:59.742741    5854 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:59.750187    5854 out.go:177] * Deleting "old-k8s-version-195000" in qemu2 ...
	W0731 12:12:59.777225    5854 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:12:59.777240    5854 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:04.779479    5854 start.go:360] acquireMachinesLock for old-k8s-version-195000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:04.780079    5854 start.go:364] duration metric: took 474.959µs to acquireMachinesLock for "old-k8s-version-195000"
	I0731 12:13:04.780151    5854 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:04.780488    5854 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:04.787866    5854 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:04.838308    5854 start.go:159] libmachine.API.Create for "old-k8s-version-195000" (driver="qemu2")
	I0731 12:13:04.838364    5854 client.go:168] LocalClient.Create starting
	I0731 12:13:04.838474    5854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:04.838546    5854 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:04.838566    5854 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:04.838640    5854 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:04.838685    5854 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:04.838698    5854 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:04.839691    5854 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:05.003168    5854 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:05.153286    5854 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:05.153294    5854 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:05.153559    5854 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:13:05.163795    5854 main.go:141] libmachine: STDOUT: 
	I0731 12:13:05.163830    5854 main.go:141] libmachine: STDERR: 
	I0731 12:13:05.163895    5854 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2 +20000M
	I0731 12:13:05.172363    5854 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:05.172386    5854 main.go:141] libmachine: STDERR: 
	I0731 12:13:05.172399    5854 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:13:05.172404    5854 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:05.172411    5854 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:05.172438    5854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:14:17:28:02:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:13:05.174253    5854 main.go:141] libmachine: STDOUT: 
	I0731 12:13:05.174269    5854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:05.174282    5854 client.go:171] duration metric: took 335.918709ms to LocalClient.Create
	I0731 12:13:07.176424    5854 start.go:128] duration metric: took 2.395935667s to createHost
	I0731 12:13:07.176516    5854 start.go:83] releasing machines lock for "old-k8s-version-195000", held for 2.39645225s
	W0731 12:13:07.176802    5854 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-195000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-195000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:07.185230    5854 out.go:177] 
	W0731 12:13:07.192128    5854 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:07.192148    5854 out.go:239] * 
	* 
	W0731 12:13:07.193867    5854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:07.209202    5854 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (53.220208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.03s)
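Annotation on the failure mode: every start in this group dies at the same layer. libmachine builds the disk image with qemu-img successfully, but qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the /var/run/socket_vmnet unix socket, so the VM never boots. A minimal diagnostic sketch in Go, assuming only the standard library (the probe is illustrative, not part of minikube; the socket path is the SocketVMnetPath value from the profile config):

// vmnetprobe.go - hedged sketch: check whether the socket_vmnet daemon is
// accepting connections before attempting to launch the QEMU VM.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

const socketVMnetPath = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config

func main() {
	// "Connection refused" here matches the STDERR in the log above and
	// usually means the socket_vmnet daemon is not running on the host.
	conn, err := net.DialTimeout("unix", socketVMnetPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketVMnetPath, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}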

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-195000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-195000 create -f testdata/busybox.yaml: exit status 1 (32.14575ms)

** stderr ** 
	error: context "old-k8s-version-195000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-195000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.288459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (28.257375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
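Annotation: DeployApp never reaches the cluster; kubectl exits immediately because the failed first start left no "old-k8s-version-195000" context in the kubeconfig. A sketch of a guard that separates "context missing" from a genuine deploy failure, shelling out to kubectl the same way the harness does (contextExists is our illustrative helper, not part of the test suite):

// contextcheck.go - hedged sketch: confirm a kubeconfig context exists
// before running kubectl commands against it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-195000")
	if err != nil || !ok {
		fmt.Fprintln(os.Stderr, `context "old-k8s-version-195000" does not exist`)
		os.Exit(1)
	}
	fmt.Println("context found; safe to create testdata/busybox.yaml")
}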

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-195000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-195000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-195000 describe deploy/metrics-server -n kube-system: exit status 1 (27.590625ms)

** stderr ** 
	error: context "old-k8s-version-195000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-195000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.19875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
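Annotation: the assertion expects " fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prefixed onto the --images override for the MetricsServer addon. A tiny sketch of that composition, assuming the expected reference is simply registry + "/" + image (an inference from the flags and the expected string above, not a quote of minikube's addon code):

// imageoverride.go - hedged sketch: how the expected metrics-server image
// reference is apparently composed from the two addon flags.
package main

import "fmt"

func main() {
	images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"} // --images
	registries := map[string]string{"MetricsServer": "fake.domain"}                // --registries
	expected := registries["MetricsServer"] + "/" + images["MetricsServer"]
	fmt.Println(expected) // fake.domain/registry.k8s.io/echoserver:1.4
}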

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191387458s)

-- stdout --
	* [old-k8s-version-195000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-195000" primary control-plane node in "old-k8s-version-195000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-195000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-195000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:11.278142    5908 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:11.278277    5908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:11.278280    5908 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:11.278283    5908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:11.278402    5908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:11.279439    5908 out.go:298] Setting JSON to false
	I0731 12:13:11.295505    5908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4360,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:11.295581    5908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:11.300977    5908 out.go:177] * [old-k8s-version-195000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:11.308007    5908 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:11.308071    5908 notify.go:220] Checking for updates...
	I0731 12:13:11.314898    5908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:11.317921    5908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:11.320948    5908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:11.323979    5908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:11.326999    5908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:11.330228    5908 config.go:182] Loaded profile config "old-k8s-version-195000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:13:11.333906    5908 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 12:13:11.336926    5908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:11.339939    5908 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:13:11.346942    5908 start.go:297] selected driver: qemu2
	I0731 12:13:11.346949    5908 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:11.347019    5908 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:11.349236    5908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:11.349265    5908 cni.go:84] Creating CNI manager for ""
	I0731 12:13:11.349273    5908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 12:13:11.349292    5908 start.go:340] cluster config:
	{Name:old-k8s-version-195000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-195000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:11.352742    5908 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:11.361753    5908 out.go:177] * Starting "old-k8s-version-195000" primary control-plane node in "old-k8s-version-195000" cluster
	I0731 12:13:11.365930    5908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 12:13:11.365945    5908 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 12:13:11.365957    5908 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:11.366019    5908 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:11.366025    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 12:13:11.366083    5908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/old-k8s-version-195000/config.json ...
	I0731 12:13:11.366571    5908 start.go:360] acquireMachinesLock for old-k8s-version-195000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:11.366599    5908 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "old-k8s-version-195000"
	I0731 12:13:11.366607    5908 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:11.366613    5908 fix.go:54] fixHost starting: 
	I0731 12:13:11.366723    5908 fix.go:112] recreateIfNeeded on old-k8s-version-195000: state=Stopped err=<nil>
	W0731 12:13:11.366731    5908 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:11.369974    5908 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-195000" ...
	I0731 12:13:11.376931    5908 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:11.376984    5908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:14:17:28:02:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:13:11.378953    5908 main.go:141] libmachine: STDOUT: 
	I0731 12:13:11.378974    5908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:11.379001    5908 fix.go:56] duration metric: took 12.388834ms for fixHost
	I0731 12:13:11.379005    5908 start.go:83] releasing machines lock for "old-k8s-version-195000", held for 12.401542ms
	W0731 12:13:11.379013    5908 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:11.379051    5908 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:11.379056    5908 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:16.381169    5908 start.go:360] acquireMachinesLock for old-k8s-version-195000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:16.381640    5908 start.go:364] duration metric: took 369.625µs to acquireMachinesLock for "old-k8s-version-195000"
	I0731 12:13:16.381773    5908 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:16.381795    5908 fix.go:54] fixHost starting: 
	I0731 12:13:16.382503    5908 fix.go:112] recreateIfNeeded on old-k8s-version-195000: state=Stopped err=<nil>
	W0731 12:13:16.382530    5908 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:16.391309    5908 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-195000" ...
	I0731 12:13:16.395373    5908 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:16.395632    5908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:14:17:28:02:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/old-k8s-version-195000/disk.qcow2
	I0731 12:13:16.405120    5908 main.go:141] libmachine: STDOUT: 
	I0731 12:13:16.405175    5908 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:16.405248    5908 fix.go:56] duration metric: took 23.456125ms for fixHost
	I0731 12:13:16.405265    5908 start.go:83] releasing machines lock for "old-k8s-version-195000", held for 23.603792ms
	W0731 12:13:16.405495    5908 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-195000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-195000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:16.415397    5908 out.go:177] 
	W0731 12:13:16.419368    5908 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:16.419406    5908 out.go:239] * 
	* 
	W0731 12:13:16.420731    5908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:16.430279    5908 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-195000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (57.378709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
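Annotation: the SecondStart trace shows minikube's start retry policy end to end: fixHost fails, the tool logs "Will try again in 5 seconds ...", re-acquires the machines lock, retries exactly once, and then exits with GUEST_PROVISION (the exit status 80 recorded by the test). A compressed sketch of that control flow; startHost is a stand-in for the real fixHost/createHost path, not minikube's actual code:

// retrystart.go - hedged sketch of the retry flow in the SecondStart log:
// one retry after a five second pause, then a hard GUEST_PROVISION failure.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func startHost() error {
	// Stand-in for fixHost/createHost; in the log this is where
	// socket_vmnet_client reports "Connection refused".
	return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Fprintf(os.Stderr, "! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80)
		}
	}
}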

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-195000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (30.652709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
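Annotation: with no context to build a client config from, the wait for the dashboard pod never starts. For illustration, the post-stop-start wait could be approximated with kubectl as below; the label selector is an assumption based on the standard dashboard addon, while the kubernetes-dashboard namespace is taken from the next test's kubectl call:

// dashboardwait.go - hedged sketch: approximate the post-stop-start wait for
// the dashboard pod with `kubectl wait` (label selector is assumed).
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "old-k8s-version-195000",
		"wait", "--for=condition=Ready", "pod",
		"-l", "k8s-app=kubernetes-dashboard", // assumed label
		"-n", "kubernetes-dashboard",
		"--timeout=5m0s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}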

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-195000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-195000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-195000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.088042ms)

** stderr ** 
	error: context "old-k8s-version-195000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-195000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.021583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-195000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.40525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
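Annotation: the images check is a plain want-versus-got comparison; every default v1.20.0 image is reported missing because `image list` ran against a VM that never booted, so the got side is empty. A sketch of the set difference behind the diff above (the `-want +got` rendering suggests go-cmp in the real test; this sketch only reproduces the logic):

// imagediff.go - hedged sketch: list expected images absent from the
// `minikube image list --format=json` output.
package main

import "fmt"

func missingImages(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	var missing []string
	for _, img := range want {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	want := []string{
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/kube-controller-manager:v1.20.0",
		"k8s.gcr.io/kube-proxy:v1.20.0",
		"k8s.gcr.io/kube-scheduler:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	got := []string{} // empty: the VM never started, so no images are loaded
	for _, img := range missingImages(want, got) {
		fmt.Println("-", img)
	}
}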

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-195000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-195000 --alsologtostderr -v=1: exit status 83 (38.795292ms)

-- stdout --
	* The control-plane node old-k8s-version-195000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-195000"

-- /stdout --
** stderr ** 
	I0731 12:13:16.686569    5931 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:16.686963    5931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:16.686967    5931 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:16.686969    5931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:16.687128    5931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:16.687343    5931 out.go:298] Setting JSON to false
	I0731 12:13:16.687348    5931 mustload.go:65] Loading cluster: old-k8s-version-195000
	I0731 12:13:16.687547    5931 config.go:182] Loaded profile config "old-k8s-version-195000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0731 12:13:16.691732    5931 out.go:177] * The control-plane node old-k8s-version-195000 host is not running: state=Stopped
	I0731 12:13:16.694620    5931 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-195000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-195000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.608375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (29.604667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-195000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
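Annotation: every post-mortem in this group runs `minikube status --format={{.Host}}` and then accepts exit status 7 as "may be ok", since it merely indicates a stopped host. A sketch of that helper pattern using only the standard library; the reading of code 7 comes from the harness's own notes above, not from minikube documentation:

// statuscheck.go - hedged sketch of the post-mortem probe: run
// `minikube status` and treat exit status 7 as a non-fatal "Stopped".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout carries the host state, e.g. "Stopped"
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		return state, exitErr.ExitCode()
	}
	return state, 0
}

func main() {
	state, code := hostState("old-k8s-version-195000")
	if code == 7 {
		// status error: exit status 7 (may be ok)
		fmt.Printf("host is not running, skipping log retrieval (state=%q)\n", state)
		return
	}
	fmt.Println("host state:", state, "exit code:", code)
}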

TestStartStop/group/no-preload/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.779322375s)

-- stdout --
	* [no-preload-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-762000" primary control-plane node in "no-preload-762000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-762000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:17.001164    5948 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:17.001298    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:17.001301    5948 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:17.001304    5948 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:17.001464    5948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:17.002542    5948 out.go:298] Setting JSON to false
	I0731 12:13:17.019139    5948 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4366,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:17.019207    5948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:17.022888    5948 out.go:177] * [no-preload-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:17.029858    5948 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:17.029906    5948 notify.go:220] Checking for updates...
	I0731 12:13:17.036814    5948 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:17.039829    5948 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:17.042745    5948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:17.045804    5948 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:17.048817    5948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:17.052175    5948 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:17.052243    5948 config.go:182] Loaded profile config "stopped-upgrade-532000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0731 12:13:17.052288    5948 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:17.056789    5948 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:13:17.063803    5948 start.go:297] selected driver: qemu2
	I0731 12:13:17.063810    5948 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:17.063815    5948 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:17.066203    5948 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:13:17.068820    5948 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:13:17.071859    5948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:17.071887    5948 cni.go:84] Creating CNI manager for ""
	I0731 12:13:17.071895    5948 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:17.071899    5948 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:13:17.071921    5948 start.go:340] cluster config:
	{Name:no-preload-762000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:17.075625    5948 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.084843    5948 out.go:177] * Starting "no-preload-762000" primary control-plane node in "no-preload-762000" cluster
	I0731 12:13:17.088665    5948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:13:17.088754    5948 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/no-preload-762000/config.json ...
	I0731 12:13:17.088772    5948 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/no-preload-762000/config.json: {Name:mk2f16bfa6d173a1cee7fcf89123eb1d328994f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:13:17.088770    5948 cache.go:107] acquiring lock: {Name:mkab61a9befdc8ee3aa9e3284d82f4b00197cb50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088789    5948 cache.go:107] acquiring lock: {Name:mk1d12ca53e45b3e8b9e16d35f7498ea0f4170fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088800    5948 cache.go:107] acquiring lock: {Name:mk5846a93176e83e4536114637fb0519d6d44fed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088805    5948 cache.go:107] acquiring lock: {Name:mke6fc33a2bb1cae441158ad564f9aa812858ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088862    5948 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:13:17.088872    5948 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 83.292µs
	I0731 12:13:17.088878    5948 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:13:17.088889    5948 cache.go:107] acquiring lock: {Name:mk8e426e5e446ce89007d7dcc3403fadcfe43f36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088995    5948 cache.go:107] acquiring lock: {Name:mk896e0d3a37aeee0b18ce883a25c3a4e496b894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.088998    5948 cache.go:107] acquiring lock: {Name:mka75fb82703b7c30de9c3b78f52259d1e26a533 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.089004    5948 cache.go:107] acquiring lock: {Name:mkfc5188565a7549bfc1616fb0832c6b8c146621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:17.089139    5948 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0731 12:13:17.089185    5948 start.go:360] acquireMachinesLock for no-preload-762000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:17.089196    5948 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:13:17.089197    5948 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:13:17.089216    5948 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:13:17.089208    5948 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:13:17.089225    5948 start.go:364] duration metric: took 34.291µs to acquireMachinesLock for "no-preload-762000"
	I0731 12:13:17.089258    5948 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:13:17.089292    5948 start.go:93] Provisioning new machine with config: &{Name:no-preload-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:17.089350    5948 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:17.089390    5948 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:13:17.097729    5948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:17.101406    5948 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0731 12:13:17.101989    5948 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0731 12:13:17.102163    5948 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0731 12:13:17.102263    5948 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0731 12:13:17.102338    5948 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0731 12:13:17.104137    5948 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0731 12:13:17.104313    5948 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0731 12:13:17.114877    5948 start.go:159] libmachine.API.Create for "no-preload-762000" (driver="qemu2")
	I0731 12:13:17.114900    5948 client.go:168] LocalClient.Create starting
	I0731 12:13:17.114971    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:17.115000    5948 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:17.115012    5948 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:17.115056    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:17.115079    5948 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:17.115085    5948 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:17.115530    5948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:17.276115    5948 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:17.332886    5948 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:17.332903    5948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:17.333166    5948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:17.342697    5948 main.go:141] libmachine: STDOUT: 
	I0731 12:13:17.342717    5948 main.go:141] libmachine: STDERR: 
	I0731 12:13:17.342773    5948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2 +20000M
	I0731 12:13:17.351585    5948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:17.351603    5948 main.go:141] libmachine: STDERR: 
	I0731 12:13:17.351616    5948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:17.351620    5948 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:17.351631    5948 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:17.351660    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:af:75:4c:bb:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:17.353519    5948 main.go:141] libmachine: STDOUT: 
	I0731 12:13:17.353536    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:17.353556    5948 client.go:171] duration metric: took 238.655417ms to LocalClient.Create
	I0731 12:13:17.485118    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0731 12:13:17.495962    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0731 12:13:17.510820    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0731 12:13:17.516758    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0731 12:13:17.520497    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0731 12:13:17.552631    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0731 12:13:17.578992    5948 cache.go:162] opening:  /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0731 12:13:17.664570    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:13:17.664600    5948 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 575.719291ms
	I0731 12:13:17.664611    5948 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:13:19.353748    5948 start.go:128] duration metric: took 2.26440125s to createHost
	I0731 12:13:19.353817    5948 start.go:83] releasing machines lock for "no-preload-762000", held for 2.264567917s
	W0731 12:13:19.353870    5948 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:19.366759    5948 out.go:177] * Deleting "no-preload-762000" in qemu2 ...
	W0731 12:13:19.388253    5948 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:19.388280    5948 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:20.393361    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:13:20.393395    5948 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 3.304660458s
	I0731 12:13:20.393412    5948 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:13:20.497855    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:13:20.497880    5948 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 3.409170208s
	I0731 12:13:20.497895    5948 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:13:20.674539    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:13:20.674558    5948 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.585637833s
	I0731 12:13:20.674566    5948 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:13:21.017346    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:13:21.017378    5948 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 3.928634292s
	I0731 12:13:21.017390    5948 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:13:21.162881    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:13:21.162919    5948 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 4.073998292s
	I0731 12:13:21.162936    5948 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:13:24.389277    5948 start.go:360] acquireMachinesLock for no-preload-762000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:24.389462    5948 start.go:364] duration metric: took 156µs to acquireMachinesLock for "no-preload-762000"
	I0731 12:13:24.389514    5948 start.go:93] Provisioning new machine with config: &{Name:no-preload-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:24.389586    5948 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:24.397882    5948 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:24.422403    5948 start.go:159] libmachine.API.Create for "no-preload-762000" (driver="qemu2")
	I0731 12:13:24.422434    5948 client.go:168] LocalClient.Create starting
	I0731 12:13:24.422507    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:24.422552    5948 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:24.422567    5948 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:24.422622    5948 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:24.422650    5948 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:24.422660    5948 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:24.423009    5948 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:24.578996    5948 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:24.691643    5948 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:24.691655    5948 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:24.691901    5948 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:24.701383    5948 main.go:141] libmachine: STDOUT: 
	I0731 12:13:24.701470    5948 main.go:141] libmachine: STDERR: 
	I0731 12:13:24.701516    5948 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2 +20000M
	I0731 12:13:24.709542    5948 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:24.709563    5948 main.go:141] libmachine: STDERR: 
	I0731 12:13:24.709573    5948 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:24.709583    5948 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:24.709592    5948 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:24.709639    5948 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:92:b0:d8:8b:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:24.709956    5948 cache.go:157] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:13:24.710147    5948 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 7.621283s
	I0731 12:13:24.710168    5948 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:13:24.710207    5948 cache.go:87] Successfully saved all images to host disk.
	I0731 12:13:24.711616    5948 main.go:141] libmachine: STDOUT: 
	I0731 12:13:24.711627    5948 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:24.711639    5948 client.go:171] duration metric: took 289.205875ms to LocalClient.Create
	I0731 12:13:26.712947    5948 start.go:128] duration metric: took 2.323379625s to createHost
	I0731 12:13:26.713001    5948 start.go:83] releasing machines lock for "no-preload-762000", held for 2.323561791s
	W0731 12:13:26.713166    5948 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-762000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-762000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:26.722806    5948 out.go:177] 
	W0731 12:13:26.730831    5948 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:26.730847    5948 out.go:239] * 
	* 
	W0731 12:13:26.731849    5948 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:26.743859    5948 out.go:177] 

** /stderr **
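Note on the sequence above: disk creation reduces to two qemu-img invocations (a raw-to-qcow2 convert, then a "+20000M" resize), and both succeed here; the run only fails at the next step, when the VM is launched through socket_vmnet_client. A minimal Go sketch of those two calls, assuming qemu-img is on PATH (the helper name and the simplified error handling are illustrative, not minikube's actual driver code):

package main

import (
	"fmt"
	"os/exec"
)

// createDiskImage mirrors the two steps logged by libmachine: convert the raw
// scratch file to qcow2, then grow the image in place (e.g. grow = "+20000M").
func createDiskImage(rawPath, qcow2Path, grow string) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", rawPath, qcow2Path).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2Path, grow).CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(createDiskImage("disk.qcow2.raw", "disk.qcow2", "+20000M"))
}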
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (39.862417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.82s)
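Every start attempt in this test dies at the same point: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, consistent with the socket_vmnet daemon not listening on the build agent. A hypothetical preflight probe for that socket (not part of minikube, just a way to reproduce the failure mode outside the test):

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSocketVMnet attempts a unix-domain connection to the socket that the
// qemu2 driver reaches through socket_vmnet_client; on this agent it fails,
// matching the "Connection refused" in the log above.
func probeSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet unreachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err)
	}
}

The image-cache work interleaved in the log (cache.go saving the v1.31.0-beta.0 images to tar files) completes normally; it is independent of the VM start and not implicated in the failure.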

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-762000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-762000 create -f testdata/busybox.yaml: exit status 1 (28.760833ms)

** stderr ** 
	error: context "no-preload-762000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-762000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (28.934375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (29.390167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
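This failure is purely downstream of FirstStart: the cluster never came up, so the kubeconfig contains no "no-preload-762000" context for kubectl to select. The precondition kubectl checks can be reproduced with client-go; a small sketch, assuming k8s.io/client-go is available (the helper name is ours):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// hasContext reports whether a named context exists in a kubeconfig file,
// the same precondition `kubectl --context <name>` fails on above.
func hasContext(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := hasContext("/Users/jenkins/minikube-integration/19356-1202/kubeconfig", "no-preload-762000")
	fmt.Println(ok, err) // false, <nil> in this run: the context was never written
}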

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-762000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-762000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-762000 describe deploy/metrics-server -n kube-system: exit status 1 (27.782541ms)

** stderr ** 
	error: context "no-preload-762000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-762000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (29.133875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
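The assertion here is a substring check: after `addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain`, the deployment description should mention the registry-prefixed image. With no context there is no description at all, so the check at start_stop_delete_test.go:221 fails against empty output. A reduced sketch of that check, under our own naming:

package main

import (
	"fmt"
	"strings"
)

// addonImageOverridden is a simplified stand-in for the test's assertion that
// `kubectl describe deploy/metrics-server` mentions the overridden image.
func addonImageOverridden(describeOutput string) bool {
	return strings.Contains(describeOutput, "fake.domain/registry.k8s.io/echoserver:1.4")
}

func main() {
	fmt.Println(addonImageOverridden("")) // false: describe produced no output here
}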

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.190129s)

-- stdout --
	* [no-preload-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-762000" primary control-plane node in "no-preload-762000" cluster
	* Restarting existing qemu2 VM for "no-preload-762000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-762000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:30.423805    6025 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:30.423973    6025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:30.423977    6025 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:30.423979    6025 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:30.424120    6025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:30.425115    6025 out.go:298] Setting JSON to false
	I0731 12:13:30.441241    6025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4379,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:30.441306    6025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:30.445959    6025 out.go:177] * [no-preload-762000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:30.452976    6025 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:30.453035    6025 notify.go:220] Checking for updates...
	I0731 12:13:30.460883    6025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:30.463933    6025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:30.466879    6025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:30.469849    6025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:30.472869    6025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:30.476077    6025 config.go:182] Loaded profile config "no-preload-762000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:13:30.476336    6025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:30.480886    6025 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:13:30.487850    6025 start.go:297] selected driver: qemu2
	I0731 12:13:30.487857    6025 start.go:901] validating driver "qemu2" against &{Name:no-preload-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:30.487912    6025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:30.490399    6025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:30.490439    6025 cni.go:84] Creating CNI manager for ""
	I0731 12:13:30.490446    6025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:30.490473    6025 start.go:340] cluster config:
	{Name:no-preload-762000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-762000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:30.494101    6025 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.500812    6025 out.go:177] * Starting "no-preload-762000" primary control-plane node in "no-preload-762000" cluster
	I0731 12:13:30.504892    6025 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:13:30.504964    6025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/no-preload-762000/config.json ...
	I0731 12:13:30.504998    6025 cache.go:107] acquiring lock: {Name:mk5846a93176e83e4536114637fb0519d6d44fed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505028    6025 cache.go:107] acquiring lock: {Name:mk896e0d3a37aeee0b18ce883a25c3a4e496b894 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505066    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0731 12:13:30.505076    6025 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 80.417µs
	I0731 12:13:30.505090    6025 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0731 12:13:30.505091    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0731 12:13:30.505096    6025 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 69.292µs
	I0731 12:13:30.505096    6025 cache.go:107] acquiring lock: {Name:mkab61a9befdc8ee3aa9e3284d82f4b00197cb50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505069    6025 cache.go:107] acquiring lock: {Name:mk8e426e5e446ce89007d7dcc3403fadcfe43f36 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505125    6025 cache.go:107] acquiring lock: {Name:mke6fc33a2bb1cae441158ad564f9aa812858ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505128    6025 cache.go:107] acquiring lock: {Name:mkfc5188565a7549bfc1616fb0832c6b8c146621 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505140    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0731 12:13:30.505144    6025 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 48.125µs
	I0731 12:13:30.505147    6025 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0731 12:13:30.505106    6025 cache.go:107] acquiring lock: {Name:mka75fb82703b7c30de9c3b78f52259d1e26a533 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.504994    6025 cache.go:107] acquiring lock: {Name:mk1d12ca53e45b3e8b9e16d35f7498ea0f4170fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:30.505168    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0731 12:13:30.505172    6025 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 86.333µs
	I0731 12:13:30.505178    6025 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0731 12:13:30.505101    6025 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0731 12:13:30.505214    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0731 12:13:30.505219    6025 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 107.792µs
	I0731 12:13:30.505224    6025 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0731 12:13:30.505228    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0731 12:13:30.505230    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0731 12:13:30.505233    6025 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 127.708µs
	I0731 12:13:30.505237    6025 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0731 12:13:30.505231    6025 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 195.292µs
	I0731 12:13:30.505268    6025 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0731 12:13:30.505240    6025 cache.go:115] /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0731 12:13:30.505279    6025 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 282.542µs
	I0731 12:13:30.505288    6025 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0731 12:13:30.505292    6025 cache.go:87] Successfully saved all images to host disk.
	I0731 12:13:30.505406    6025 start.go:360] acquireMachinesLock for no-preload-762000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:30.505433    6025 start.go:364] duration metric: took 21.208µs to acquireMachinesLock for "no-preload-762000"
	I0731 12:13:30.505441    6025 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:30.505446    6025 fix.go:54] fixHost starting: 
	I0731 12:13:30.505551    6025 fix.go:112] recreateIfNeeded on no-preload-762000: state=Stopped err=<nil>
	W0731 12:13:30.505558    6025 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:30.512836    6025 out.go:177] * Restarting existing qemu2 VM for "no-preload-762000" ...
	I0731 12:13:30.516933    6025 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:30.516981    6025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:92:b0:d8:8b:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:30.518949    6025 main.go:141] libmachine: STDOUT: 
	I0731 12:13:30.518966    6025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:30.518996    6025 fix.go:56] duration metric: took 13.549083ms for fixHost
	I0731 12:13:30.519001    6025 start.go:83] releasing machines lock for "no-preload-762000", held for 13.56425ms
	W0731 12:13:30.519006    6025 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:30.519041    6025 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:30.519046    6025 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:35.521201    6025 start.go:360] acquireMachinesLock for no-preload-762000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:35.521719    6025 start.go:364] duration metric: took 399.459µs to acquireMachinesLock for "no-preload-762000"
	I0731 12:13:35.521859    6025 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:35.521881    6025 fix.go:54] fixHost starting: 
	I0731 12:13:35.522699    6025 fix.go:112] recreateIfNeeded on no-preload-762000: state=Stopped err=<nil>
	W0731 12:13:35.522727    6025 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:35.537882    6025 out.go:177] * Restarting existing qemu2 VM for "no-preload-762000" ...
	I0731 12:13:35.542033    6025 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:35.542232    6025 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:92:b0:d8:8b:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/no-preload-762000/disk.qcow2
	I0731 12:13:35.551476    6025 main.go:141] libmachine: STDOUT: 
	I0731 12:13:35.551535    6025 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:35.551619    6025 fix.go:56] duration metric: took 29.742666ms for fixHost
	I0731 12:13:35.551638    6025 start.go:83] releasing machines lock for "no-preload-762000", held for 29.878875ms
	W0731 12:13:35.551818    6025 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-762000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-762000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:35.558998    6025 out.go:177] 
	W0731 12:13:35.562149    6025 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:35.562191    6025 out.go:239] * 
	* 
	W0731 12:13:35.564674    6025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:35.572077    6025 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-762000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (65.871ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
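Unlike FirstStart, SecondStart finds the existing stopped machine and goes through fixHost and a VM restart rather than a fresh create, but the QEMU relaunch hits the same refused socket. The retry shape visible in the log (one failed attempt, a fixed 5-second pause, one more attempt, then exit with GUEST_PROVISION) can be sketched as follows; the helper name and hard-coded single retry are illustrative, not minikube's actual start.go logic:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry mimics the pattern in the log: try once, warn, wait a fixed
// five seconds, try once more, and surface the second error to the caller.
func startWithRetry(start func() error) error {
	if err := start(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		return start()
	}
	return nil
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	fmt.Println("final:", err)
}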

TestStartStop/group/embed-certs/serial/FirstStart (10.31s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.244996459s)

-- stdout --
	* [embed-certs-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-941000" primary control-plane node in "embed-certs-941000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:31.254556    6035 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:31.254688    6035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:31.254691    6035 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:31.254694    6035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:31.254821    6035 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:31.255887    6035 out.go:298] Setting JSON to false
	I0731 12:13:31.272008    6035 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4380,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:31.272087    6035 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:31.276873    6035 out.go:177] * [embed-certs-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:31.283882    6035 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:31.283942    6035 notify.go:220] Checking for updates...
	I0731 12:13:31.289842    6035 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:31.292872    6035 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:31.296886    6035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:31.299741    6035 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:31.302859    6035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:31.306134    6035 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:31.306214    6035 config.go:182] Loaded profile config "no-preload-762000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:13:31.306260    6035 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:31.309763    6035 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:13:31.316802    6035 start.go:297] selected driver: qemu2
	I0731 12:13:31.316807    6035 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:31.316815    6035 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:31.319164    6035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:13:31.322858    6035 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:13:31.325888    6035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:31.325918    6035 cni.go:84] Creating CNI manager for ""
	I0731 12:13:31.325927    6035 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:31.325931    6035 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:13:31.325958    6035 start.go:340] cluster config:
	{Name:embed-certs-941000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:31.329683    6035 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:31.337774    6035 out.go:177] * Starting "embed-certs-941000" primary control-plane node in "embed-certs-941000" cluster
	I0731 12:13:31.341860    6035 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:13:31.341873    6035 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:13:31.341882    6035 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:31.341948    6035 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:31.341954    6035 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:13:31.342008    6035 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/embed-certs-941000/config.json ...
	I0731 12:13:31.342026    6035 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/embed-certs-941000/config.json: {Name:mk804389703740da9e0d1cf026efda15a9923f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:13:31.342463    6035 start.go:360] acquireMachinesLock for embed-certs-941000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:31.342499    6035 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "embed-certs-941000"
	I0731 12:13:31.342509    6035 start.go:93] Provisioning new machine with config: &{Name:embed-certs-941000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:31.342542    6035 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:31.351829    6035 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:31.370387    6035 start.go:159] libmachine.API.Create for "embed-certs-941000" (driver="qemu2")
	I0731 12:13:31.370423    6035 client.go:168] LocalClient.Create starting
	I0731 12:13:31.370492    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:31.370523    6035 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:31.370533    6035 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:31.370573    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:31.370601    6035 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:31.370609    6035 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:31.371013    6035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:31.523668    6035 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:31.983618    6035 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:31.983632    6035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:31.984283    6035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:31.993689    6035 main.go:141] libmachine: STDOUT: 
	I0731 12:13:31.993712    6035 main.go:141] libmachine: STDERR: 
	I0731 12:13:31.993758    6035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2 +20000M
	I0731 12:13:32.001689    6035 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:32.001703    6035 main.go:141] libmachine: STDERR: 
	I0731 12:13:32.001717    6035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:32.001721    6035 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:32.001739    6035 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:32.001772    6035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:89:ee:4c:7b:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:32.003368    6035 main.go:141] libmachine: STDOUT: 
	I0731 12:13:32.003381    6035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:32.003398    6035 client.go:171] duration metric: took 632.981041ms to LocalClient.Create
	I0731 12:13:34.005535    6035 start.go:128] duration metric: took 2.663007459s to createHost
	I0731 12:13:34.005583    6035 start.go:83] releasing machines lock for "embed-certs-941000", held for 2.663118083s
	W0731 12:13:34.005654    6035 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:34.012885    6035 out.go:177] * Deleting "embed-certs-941000" in qemu2 ...
	W0731 12:13:34.048162    6035 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:34.048187    6035 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:39.050374    6035 start.go:360] acquireMachinesLock for embed-certs-941000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:39.050953    6035 start.go:364] duration metric: took 441.584µs to acquireMachinesLock for "embed-certs-941000"
	I0731 12:13:39.051094    6035 start.go:93] Provisioning new machine with config: &{Name:embed-certs-941000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:39.051422    6035 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:39.058964    6035 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:39.110619    6035 start.go:159] libmachine.API.Create for "embed-certs-941000" (driver="qemu2")
	I0731 12:13:39.110665    6035 client.go:168] LocalClient.Create starting
	I0731 12:13:39.110784    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:39.110856    6035 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:39.110876    6035 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:39.110945    6035 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:39.110988    6035 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:39.111002    6035 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:39.111520    6035 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:39.277509    6035 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:39.409248    6035 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:39.409254    6035 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:39.409454    6035 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:39.418809    6035 main.go:141] libmachine: STDOUT: 
	I0731 12:13:39.418827    6035 main.go:141] libmachine: STDERR: 
	I0731 12:13:39.418881    6035 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2 +20000M
	I0731 12:13:39.426635    6035 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:39.426650    6035 main.go:141] libmachine: STDERR: 
	I0731 12:13:39.426662    6035 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:39.426668    6035 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:39.426682    6035 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:39.426715    6035 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:07:c7:68:d5:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:39.428286    6035 main.go:141] libmachine: STDOUT: 
	I0731 12:13:39.428301    6035 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:39.428314    6035 client.go:171] duration metric: took 317.64725ms to LocalClient.Create
	I0731 12:13:41.430423    6035 start.go:128] duration metric: took 2.379013209s to createHost
	I0731 12:13:41.430528    6035 start.go:83] releasing machines lock for "embed-certs-941000", held for 2.379522833s
	W0731 12:13:41.430955    6035 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:41.440502    6035 out.go:177] 
	W0731 12:13:41.444329    6035 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:41.444356    6035 out.go:239] * 
	* 
	W0731 12:13:41.446967    6035 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:41.456412    6035 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (64.689708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.31s)
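
Every QEMU start in this run fails at the same host-side step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the VM never gets its network file descriptor. A minimal host-side check, assuming the default install paths that appear in the log above (the gateway address in the last command is an assumption, not taken from this report):

	# Is the daemon's unix socket present, and is the process alive?
	ls -l /var/run/socket_vmnet
	ps aux | grep '[s]ocket_vmnet'
	# If the daemon is down, the socket_vmnet project documents starting it roughly like this:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet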

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-762000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (31.6495ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
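
The remaining no-preload failures never reach a cluster: FirstStart aborted before a profile was provisioned, so no kubeconfig context named "no-preload-762000" was ever written, and every kubectl call fails at client-config time. A quick confirmation, using the kubeconfig path this run sets via KUBECONFIG:

	# Lists the contexts that were actually written; a missing entry explains the
	# 'context "no-preload-762000" does not exist' errors in the sections below.
	KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig kubectl config get-contexts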

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-762000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-762000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-762000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.490833ms)

** stderr ** 
	error: context "no-preload-762000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-762000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (29.544875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-762000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (28.905666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
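
The image check diffs a hard-coded want-list against the profile's `image list` output; with the host stopped the got-side is empty, so all eight v1.31.0-beta.0 images are reported missing. Against a healthy profile the same assertion can be approximated by hand (sketch only, with grep standing in for the test's structured comparison):

	out/minikube-darwin-arm64 -p no-preload-762000 image list --format=json | grep kube-apiserver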

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-762000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-762000 --alsologtostderr -v=1: exit status 83 (39.783958ms)

-- stdout --
	* The control-plane node no-preload-762000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-762000"

-- /stdout --
** stderr ** 
	I0731 12:13:35.835255    6057 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:35.835409    6057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:35.835412    6057 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:35.835415    6057 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:35.835543    6057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:35.835768    6057 out.go:298] Setting JSON to false
	I0731 12:13:35.835776    6057 mustload.go:65] Loading cluster: no-preload-762000
	I0731 12:13:35.835957    6057 config.go:182] Loaded profile config "no-preload-762000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:13:35.839640    6057 out.go:177] * The control-plane node no-preload-762000 host is not running: state=Stopped
	I0731 12:13:35.842598    6057 out.go:177]   To start a cluster, run: "minikube start -p no-preload-762000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-762000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (27.75525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (28.841083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-762000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
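
Exit status 83 here is not a pause failure as such: mustload loads the profile, sees state Stopped, and minikube exits with advice instead of pausing. By hand, the precondition the test assumes would look like this (the start step would still need a working socket_vmnet):

	out/minikube-darwin-arm64 status -p no-preload-762000   # exit 7, host Stopped
	out/minikube-darwin-arm64 start -p no-preload-762000
	out/minikube-darwin-arm64 pause -p no-preload-762000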

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.83757025s)

-- stdout --
	* [default-k8s-diff-port-527000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-527000" primary control-plane node in "default-k8s-diff-port-527000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-527000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:36.248855    6081 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:36.248992    6081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:36.248995    6081 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:36.248997    6081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:36.249124    6081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:36.250202    6081 out.go:298] Setting JSON to false
	I0731 12:13:36.266532    6081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4385,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:36.266601    6081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:36.270541    6081 out.go:177] * [default-k8s-diff-port-527000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:36.278437    6081 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:36.278457    6081 notify.go:220] Checking for updates...
	I0731 12:13:36.286464    6081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:36.289478    6081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:36.292545    6081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:36.295531    6081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:36.298545    6081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:36.301856    6081 config.go:182] Loaded profile config "embed-certs-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:36.301916    6081 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:36.301976    6081 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:36.306511    6081 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:13:36.313490    6081 start.go:297] selected driver: qemu2
	I0731 12:13:36.313497    6081 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:36.313503    6081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:36.315847    6081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 12:13:36.318483    6081 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:13:36.321515    6081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:36.321531    6081 cni.go:84] Creating CNI manager for ""
	I0731 12:13:36.321541    6081 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:36.321545    6081 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:13:36.321590    6081 start.go:340] cluster config:
	{Name:default-k8s-diff-port-527000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:36.325270    6081 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:36.333523    6081 out.go:177] * Starting "default-k8s-diff-port-527000" primary control-plane node in "default-k8s-diff-port-527000" cluster
	I0731 12:13:36.337553    6081 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:13:36.337573    6081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:13:36.337586    6081 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:36.337670    6081 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:36.337684    6081 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:13:36.337745    6081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/default-k8s-diff-port-527000/config.json ...
	I0731 12:13:36.337757    6081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/default-k8s-diff-port-527000/config.json: {Name:mke5aa20b9987293d2e7d7cbb7993d5e125f40cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:13:36.338003    6081 start.go:360] acquireMachinesLock for default-k8s-diff-port-527000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:36.338038    6081 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "default-k8s-diff-port-527000"
	I0731 12:13:36.338049    6081 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:36.338082    6081 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:36.346480    6081 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:36.364280    6081 start.go:159] libmachine.API.Create for "default-k8s-diff-port-527000" (driver="qemu2")
	I0731 12:13:36.364303    6081 client.go:168] LocalClient.Create starting
	I0731 12:13:36.364369    6081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:36.364398    6081 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:36.364406    6081 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:36.364443    6081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:36.364465    6081 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:36.364471    6081 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:36.364801    6081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:36.515601    6081 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:36.647049    6081 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:36.647055    6081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:36.647288    6081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:36.656835    6081 main.go:141] libmachine: STDOUT: 
	I0731 12:13:36.656849    6081 main.go:141] libmachine: STDERR: 
	I0731 12:13:36.656911    6081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2 +20000M
	I0731 12:13:36.664745    6081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:36.664760    6081 main.go:141] libmachine: STDERR: 
	I0731 12:13:36.664769    6081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:36.664776    6081 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:36.664789    6081 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:36.664811    6081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:9c:0c:3a:63:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:36.666470    6081 main.go:141] libmachine: STDOUT: 
	I0731 12:13:36.666486    6081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:36.666504    6081 client.go:171] duration metric: took 302.201625ms to LocalClient.Create
	I0731 12:13:38.668675    6081 start.go:128] duration metric: took 2.330577375s to createHost
	I0731 12:13:38.668739    6081 start.go:83] releasing machines lock for "default-k8s-diff-port-527000", held for 2.33072825s
	W0731 12:13:38.668802    6081 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:38.678706    6081 out.go:177] * Deleting "default-k8s-diff-port-527000" in qemu2 ...
	W0731 12:13:38.709560    6081 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:38.709586    6081 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:43.711729    6081 start.go:360] acquireMachinesLock for default-k8s-diff-port-527000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:43.712103    6081 start.go:364] duration metric: took 306.666µs to acquireMachinesLock for "default-k8s-diff-port-527000"
	I0731 12:13:43.712171    6081 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:43.712451    6081 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:43.720202    6081 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:43.770094    6081 start.go:159] libmachine.API.Create for "default-k8s-diff-port-527000" (driver="qemu2")
	I0731 12:13:43.770155    6081 client.go:168] LocalClient.Create starting
	I0731 12:13:43.770246    6081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:43.770293    6081 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:43.770306    6081 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:43.770367    6081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:43.770395    6081 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:43.770404    6081 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:43.770959    6081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:43.935459    6081 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:43.992886    6081 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:43.992891    6081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:43.993137    6081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:44.002499    6081 main.go:141] libmachine: STDOUT: 
	I0731 12:13:44.002519    6081 main.go:141] libmachine: STDERR: 
	I0731 12:13:44.002573    6081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2 +20000M
	I0731 12:13:44.010319    6081 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:44.010333    6081 main.go:141] libmachine: STDERR: 
	I0731 12:13:44.010343    6081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:44.010347    6081 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:44.010358    6081 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:44.010384    6081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:86:d4:bf:84:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:44.012039    6081 main.go:141] libmachine: STDOUT: 
	I0731 12:13:44.012052    6081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:44.012064    6081 client.go:171] duration metric: took 241.907125ms to LocalClient.Create
	I0731 12:13:46.014299    6081 start.go:128] duration metric: took 2.301853041s to createHost
	I0731 12:13:46.014398    6081 start.go:83] releasing machines lock for "default-k8s-diff-port-527000", held for 2.302304875s
	W0731 12:13:46.014710    6081 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-527000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:46.028187    6081 out.go:177] 
	W0731 12:13:46.032287    6081 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:46.032323    6081 out.go:239] * 
	* 
	W0731 12:13:46.034756    6081 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:46.043196    6081 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (67.825833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.91s)
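
The in-process retry ("Will try again in 5 seconds") cannot help here because the refusal is host-side, not a race. The failure can be reproduced without minikube at all, assuming socket_vmnet_client simply connects to the socket and then execs whatever command follows it, as the invocations in the log suggest:

	# If the daemon is down this prints the same
	# Failed to connect to "/var/run/socket_vmnet": Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true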

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-941000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-941000 create -f testdata/busybox.yaml: exit status 1 (29.879292ms)

** stderr ** 
	error: context "embed-certs-941000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-941000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (29.143834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (27.964208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-941000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-941000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-941000 describe deploy/metrics-server -n kube-system: exit status 1 (26.855583ms)

** stderr ** 
	error: context "embed-certs-941000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-941000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (29.066875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
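
The assertion at start_stop_delete_test.go:221 checks that the metrics-server Deployment picked up the registry override from --registries. Against a running cluster, a hand-rolled equivalent (a hypothetical one-liner, not part of the test suite) would be:

	kubectl --context embed-certs-941000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'

which should print fake.domain/registry.k8s.io/echoserver:1.4; here the check never gets past the missing context.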

TestStartStop/group/embed-certs/serial/SecondStart (5.72s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.649847834s)

-- stdout --
	* [embed-certs-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-941000" primary control-plane node in "embed-certs-941000" cluster
	* Restarting existing qemu2 VM for "embed-certs-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:45.486472    6133 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:45.486630    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:45.486634    6133 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:45.486636    6133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:45.486754    6133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:45.487825    6133 out.go:298] Setting JSON to false
	I0731 12:13:45.503794    6133 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4394,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:45.503858    6133 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:45.507916    6133 out.go:177] * [embed-certs-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:45.514986    6133 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:45.515062    6133 notify.go:220] Checking for updates...
	I0731 12:13:45.521979    6133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:45.525026    6133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:45.527976    6133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:45.530949    6133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:45.533968    6133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:45.535608    6133 config.go:182] Loaded profile config "embed-certs-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:45.535890    6133 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:45.539058    6133 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:13:45.545802    6133 start.go:297] selected driver: qemu2
	I0731 12:13:45.545810    6133 start.go:901] validating driver "qemu2" against &{Name:embed-certs-941000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:45.545897    6133 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:45.548135    6133 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:45.548159    6133 cni.go:84] Creating CNI manager for ""
	I0731 12:13:45.548166    6133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:45.548194    6133 start.go:340] cluster config:
	{Name:embed-certs-941000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:45.551597    6133 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:45.558947    6133 out.go:177] * Starting "embed-certs-941000" primary control-plane node in "embed-certs-941000" cluster
	I0731 12:13:45.562934    6133 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:13:45.562949    6133 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:13:45.562958    6133 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:45.563014    6133 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:45.563019    6133 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:13:45.563079    6133 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/embed-certs-941000/config.json ...
	I0731 12:13:45.563593    6133 start.go:360] acquireMachinesLock for embed-certs-941000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:46.014528    6133 start.go:364] duration metric: took 450.905917ms to acquireMachinesLock for "embed-certs-941000"
	I0731 12:13:46.014665    6133 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:46.014703    6133 fix.go:54] fixHost starting: 
	I0731 12:13:46.015384    6133 fix.go:112] recreateIfNeeded on embed-certs-941000: state=Stopped err=<nil>
	W0731 12:13:46.015432    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:46.028187    6133 out.go:177] * Restarting existing qemu2 VM for "embed-certs-941000" ...
	I0731 12:13:46.035247    6133 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:46.035454    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:07:c7:68:d5:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:46.045247    6133 main.go:141] libmachine: STDOUT: 
	I0731 12:13:46.045335    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:46.045499    6133 fix.go:56] duration metric: took 30.80775ms for fixHost
	I0731 12:13:46.045519    6133 start.go:83] releasing machines lock for "embed-certs-941000", held for 30.952416ms
	W0731 12:13:46.045566    6133 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:46.045714    6133 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:46.045737    6133 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:51.047932    6133 start.go:360] acquireMachinesLock for embed-certs-941000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:51.048508    6133 start.go:364] duration metric: took 422.625µs to acquireMachinesLock for "embed-certs-941000"
	I0731 12:13:51.048642    6133 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:51.048662    6133 fix.go:54] fixHost starting: 
	I0731 12:13:51.049355    6133 fix.go:112] recreateIfNeeded on embed-certs-941000: state=Stopped err=<nil>
	W0731 12:13:51.049380    6133 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:51.059012    6133 out.go:177] * Restarting existing qemu2 VM for "embed-certs-941000" ...
	I0731 12:13:51.063001    6133 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:51.063231    6133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:07:c7:68:d5:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/embed-certs-941000/disk.qcow2
	I0731 12:13:51.072815    6133 main.go:141] libmachine: STDOUT: 
	I0731 12:13:51.072897    6133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:51.072981    6133 fix.go:56] duration metric: took 24.316209ms for fixHost
	I0731 12:13:51.073003    6133 start.go:83] releasing machines lock for "embed-certs-941000", held for 24.46575ms
	W0731 12:13:51.073208    6133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:51.081011    6133 out.go:177] 
	W0731 12:13:51.084183    6133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:51.084213    6133 out.go:239] * 
	* 
	W0731 12:13:51.086649    6133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:51.096011    6133 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-941000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (66.87675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.72s)
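
Every start attempt in this run dies at the same driver step: socket_vmnet_client cannot reach /var/run/socket_vmnet, which means the socket_vmnet daemon is not running on the build agent. A plausible host-side triage sketch (the /opt/socket_vmnet prefix in the log suggests a source install; the brew line applies only if it was installed via Homebrew):

	ls -l /var/run/socket_vmnet                 # does the daemon's socket exist?
	sudo launchctl list | grep -i socket_vmnet  # is a launchd job loaded?
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

Until the daemon is back, every qemu2 profile on the socket_vmnet network will fail identically, which is why the retry and the delete/recreate advice above cannot help.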

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-527000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527000 create -f testdata/busybox.yaml: exit status 1 (29.946875ms)

** stderr ** 
	error: context "default-k8s-diff-port-527000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-527000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (28.343208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (28.828417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-527000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-527000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527000 describe deploy/metrics-server -n kube-system: exit status 1 (27.378625ms)

** stderr ** 
	error: context "default-k8s-diff-port-527000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-527000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (29.476333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.186621708s)

-- stdout --
	* [default-k8s-diff-port-527000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-527000" primary control-plane node in "default-k8s-diff-port-527000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-527000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:50.200393    6176 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:50.200539    6176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:50.200542    6176 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:50.200544    6176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:50.200681    6176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:50.201668    6176 out.go:298] Setting JSON to false
	I0731 12:13:50.217867    6176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4399,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:50.217922    6176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:50.222903    6176 out.go:177] * [default-k8s-diff-port-527000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:50.229863    6176 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:50.229901    6176 notify.go:220] Checking for updates...
	I0731 12:13:50.236844    6176 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:50.239859    6176 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:50.242752    6176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:50.245841    6176 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:50.248834    6176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:50.250502    6176 config.go:182] Loaded profile config "default-k8s-diff-port-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:50.250756    6176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:50.253842    6176 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:13:50.260723    6176 start.go:297] selected driver: qemu2
	I0731 12:13:50.260730    6176 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:50.260812    6176 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:50.262986    6176 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 12:13:50.263012    6176 cni.go:84] Creating CNI manager for ""
	I0731 12:13:50.263026    6176 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:50.263046    6176 start.go:340] cluster config:
	{Name:default-k8s-diff-port-527000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-527000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:50.266469    6176 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:50.274878    6176 out.go:177] * Starting "default-k8s-diff-port-527000" primary control-plane node in "default-k8s-diff-port-527000" cluster
	I0731 12:13:50.279850    6176 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 12:13:50.279866    6176 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 12:13:50.279877    6176 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:50.279936    6176 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:50.279942    6176 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0731 12:13:50.280013    6176 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/default-k8s-diff-port-527000/config.json ...
	I0731 12:13:50.280508    6176 start.go:360] acquireMachinesLock for default-k8s-diff-port-527000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:50.280535    6176 start.go:364] duration metric: took 21.125µs to acquireMachinesLock for "default-k8s-diff-port-527000"
	I0731 12:13:50.280543    6176 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:50.280549    6176 fix.go:54] fixHost starting: 
	I0731 12:13:50.280661    6176 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527000: state=Stopped err=<nil>
	W0731 12:13:50.280669    6176 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:50.283844    6176 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-527000" ...
	I0731 12:13:50.290842    6176 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:50.290882    6176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:86:d4:bf:84:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:50.292805    6176 main.go:141] libmachine: STDOUT: 
	I0731 12:13:50.292822    6176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:50.292854    6176 fix.go:56] duration metric: took 12.307625ms for fixHost
	I0731 12:13:50.292858    6176 start.go:83] releasing machines lock for "default-k8s-diff-port-527000", held for 12.319292ms
	W0731 12:13:50.292866    6176 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:50.292894    6176 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:50.292899    6176 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:55.295018    6176 start.go:360] acquireMachinesLock for default-k8s-diff-port-527000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:55.295446    6176 start.go:364] duration metric: took 336.625µs to acquireMachinesLock for "default-k8s-diff-port-527000"
	I0731 12:13:55.295547    6176 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:13:55.295571    6176 fix.go:54] fixHost starting: 
	I0731 12:13:55.296367    6176 fix.go:112] recreateIfNeeded on default-k8s-diff-port-527000: state=Stopped err=<nil>
	W0731 12:13:55.296397    6176 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:13:55.311903    6176 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-527000" ...
	I0731 12:13:55.315678    6176 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:55.315928    6176 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:86:d4:bf:84:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/default-k8s-diff-port-527000/disk.qcow2
	I0731 12:13:55.325064    6176 main.go:141] libmachine: STDOUT: 
	I0731 12:13:55.325123    6176 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:55.325206    6176 fix.go:56] duration metric: took 29.639292ms for fixHost
	I0731 12:13:55.325223    6176 start.go:83] releasing machines lock for "default-k8s-diff-port-527000", held for 29.754416ms
	W0731 12:13:55.325458    6176 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-527000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:55.332815    6176 out.go:177] 
	W0731 12:13:55.335725    6176 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:13:55.335748    6176 out.go:239] * 
	* 
	W0731 12:13:55.338479    6176 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:13:55.347768    6176 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (65.031209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.25s)
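
minikube's own suggestion in the log is the blunt recovery path: delete the profile and start over. With the exact arguments this test uses, that would be:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-527000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-527000 --memory=2200 \
	  --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.30.3

It would not help here, though: the connection-refused error is on the host side, so a recreated profile hits the same wall until socket_vmnet is restored.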

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-941000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (31.323959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-941000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.756042ms)

** stderr ** 
	error: context "embed-certs-941000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-941000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (29.044875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-941000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (28.78175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
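
The cmp.Diff above lists the full expected image set on the -want side and nothing on the +got side: image list returns an empty set for a stopped host. A manual spot check, assuming jq is available on the agent (the test itself does not use it, and the repoTags field name is an assumption about the JSON output), might look like:

	out/minikube-darwin-arm64 -p embed-certs-941000 image list --format=json | jq -r '.[].repoTags[]'

On a healthy v1.30.3 cluster this should include registry.k8s.io/kube-apiserver:v1.30.3 and the rest of the list above.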

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-941000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-941000 --alsologtostderr -v=1: exit status 83 (40.744ms)

-- stdout --
	* The control-plane node embed-certs-941000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-941000"

-- /stdout --
** stderr ** 
	I0731 12:13:51.360963    6195 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:51.361131    6195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:51.361134    6195 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:51.361137    6195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:51.361279    6195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:51.361493    6195 out.go:298] Setting JSON to false
	I0731 12:13:51.361499    6195 mustload.go:65] Loading cluster: embed-certs-941000
	I0731 12:13:51.361705    6195 config.go:182] Loaded profile config "embed-certs-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:51.366157    6195 out.go:177] * The control-plane node embed-certs-941000 host is not running: state=Stopped
	I0731 12:13:51.370142    6195 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-941000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-941000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (28.779ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (28.877625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
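
Exit status 83 is minikube declining to act rather than crashing: as the stdout shows, the control-plane host is Stopped, and pause requires a running node. The CLI's own suggested recovery is simply:

	out/minikube-darwin-arm64 start -p embed-certs-941000

which in this run would fail again on the socket_vmnet connection, exactly as SecondStart did above.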

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.865227125s)

-- stdout --
	* [newest-cni-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-139000" primary control-plane node in "newest-cni-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:13:51.673852    6212 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:51.673969    6212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:51.673972    6212 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:51.673974    6212 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:51.674097    6212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:51.675119    6212 out.go:298] Setting JSON to false
	I0731 12:13:51.691043    6212 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4400,"bootTime":1722448831,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:13:51.691132    6212 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:13:51.695212    6212 out.go:177] * [newest-cni-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:13:51.703115    6212 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:13:51.703163    6212 notify.go:220] Checking for updates...
	I0731 12:13:51.708554    6212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:13:51.712148    6212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:13:51.715221    6212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:13:51.718161    6212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:13:51.721135    6212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:13:51.724511    6212 config.go:182] Loaded profile config "default-k8s-diff-port-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:51.724575    6212 config.go:182] Loaded profile config "multinode-481000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:51.724626    6212 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:13:51.729160    6212 out.go:177] * Using the qemu2 driver based on user configuration
	I0731 12:13:51.736095    6212 start.go:297] selected driver: qemu2
	I0731 12:13:51.736100    6212 start.go:901] validating driver "qemu2" against <nil>
	I0731 12:13:51.736108    6212 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:13:51.738222    6212 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0731 12:13:51.738247    6212 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0731 12:13:51.745078    6212 out.go:177] * Automatically selected the socket_vmnet network
	I0731 12:13:51.748270    6212 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:13:51.748297    6212 cni.go:84] Creating CNI manager for ""
	I0731 12:13:51.748305    6212 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:13:51.748309    6212 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 12:13:51.748334    6212 start.go:340] cluster config:
	{Name:newest-cni-139000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:13:51.751998    6212 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:13:51.761127    6212 out.go:177] * Starting "newest-cni-139000" primary control-plane node in "newest-cni-139000" cluster
	I0731 12:13:51.765122    6212 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:13:51.765136    6212 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:13:51.765148    6212 cache.go:56] Caching tarball of preloaded images
	I0731 12:13:51.765206    6212 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:13:51.765212    6212 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:13:51.765271    6212 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/newest-cni-139000/config.json ...
	I0731 12:13:51.765282    6212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/newest-cni-139000/config.json: {Name:mka1b16e630ec4fcc2056c822dd680ae69d08c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 12:13:51.765507    6212 start.go:360] acquireMachinesLock for newest-cni-139000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:51.765543    6212 start.go:364] duration metric: took 29.333µs to acquireMachinesLock for "newest-cni-139000"
	I0731 12:13:51.765554    6212 start.go:93] Provisioning new machine with config: &{Name:newest-cni-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:51.765591    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:51.774172    6212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:51.792352    6212 start.go:159] libmachine.API.Create for "newest-cni-139000" (driver="qemu2")
	I0731 12:13:51.792377    6212 client.go:168] LocalClient.Create starting
	I0731 12:13:51.792440    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:51.792470    6212 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:51.792481    6212 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:51.792515    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:51.792539    6212 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:51.792548    6212 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:51.792910    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:51.966969    6212 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:52.066235    6212 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:52.066240    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:52.066457    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:52.075806    6212 main.go:141] libmachine: STDOUT: 
	I0731 12:13:52.075826    6212 main.go:141] libmachine: STDERR: 
	I0731 12:13:52.075877    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2 +20000M
	I0731 12:13:52.083768    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:52.083790    6212 main.go:141] libmachine: STDERR: 
	I0731 12:13:52.083802    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:52.083807    6212 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:52.083820    6212 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:52.083846    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:9a:6f:ce:14:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:52.085457    6212 main.go:141] libmachine: STDOUT: 
	I0731 12:13:52.085476    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:52.085492    6212 client.go:171] duration metric: took 293.115333ms to LocalClient.Create
	I0731 12:13:54.087637    6212 start.go:128] duration metric: took 2.322062417s to createHost
	I0731 12:13:54.087734    6212 start.go:83] releasing machines lock for "newest-cni-139000", held for 2.322172375s
	W0731 12:13:54.087804    6212 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:54.096771    6212 out.go:177] * Deleting "newest-cni-139000" in qemu2 ...
	W0731 12:13:54.123676    6212 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:13:54.123699    6212 start.go:729] Will try again in 5 seconds ...
	I0731 12:13:59.125855    6212 start.go:360] acquireMachinesLock for newest-cni-139000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:13:59.126443    6212 start.go:364] duration metric: took 479.209µs to acquireMachinesLock for "newest-cni-139000"
	I0731 12:13:59.126588    6212 start.go:93] Provisioning new machine with config: &{Name:newest-cni-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0731 12:13:59.126806    6212 start.go:125] createHost starting for "" (driver="qemu2")
	I0731 12:13:59.132146    6212 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 12:13:59.182423    6212 start.go:159] libmachine.API.Create for "newest-cni-139000" (driver="qemu2")
	I0731 12:13:59.182479    6212 client.go:168] LocalClient.Create starting
	I0731 12:13:59.182592    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/ca.pem
	I0731 12:13:59.182658    6212 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:59.182683    6212 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:59.182747    6212 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19356-1202/.minikube/certs/cert.pem
	I0731 12:13:59.182800    6212 main.go:141] libmachine: Decoding PEM data...
	I0731 12:13:59.182810    6212 main.go:141] libmachine: Parsing certificate...
	I0731 12:13:59.183550    6212 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0731 12:13:59.341047    6212 main.go:141] libmachine: Creating SSH key...
	I0731 12:13:59.445277    6212 main.go:141] libmachine: Creating Disk image...
	I0731 12:13:59.445287    6212 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0731 12:13:59.445503    6212 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:59.454827    6212 main.go:141] libmachine: STDOUT: 
	I0731 12:13:59.454846    6212 main.go:141] libmachine: STDERR: 
	I0731 12:13:59.454911    6212 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2 +20000M
	I0731 12:13:59.462883    6212 main.go:141] libmachine: STDOUT: Image resized.
	
	I0731 12:13:59.462898    6212 main.go:141] libmachine: STDERR: 
	I0731 12:13:59.462910    6212 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:59.462914    6212 main.go:141] libmachine: Starting QEMU VM...
	I0731 12:13:59.462922    6212 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:13:59.462958    6212 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9b:d9:43:67:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:13:59.464568    6212 main.go:141] libmachine: STDOUT: 
	I0731 12:13:59.464582    6212 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:13:59.464596    6212 client.go:171] duration metric: took 282.116791ms to LocalClient.Create
	I0731 12:14:01.466739    6212 start.go:128] duration metric: took 2.339941125s to createHost
	I0731 12:14:01.466790    6212 start.go:83] releasing machines lock for "newest-cni-139000", held for 2.340360916s
	W0731 12:14:01.467210    6212 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:01.480923    6212 out.go:177] 
	W0731 12:14:01.484930    6212 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:14:01.484966    6212 out.go:239] * 
	* 
	W0731 12:14:01.487795    6212 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:14:01.500898    6212 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (68.783834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
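Note: every qemu2 start in this report dies the same way: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the build agent rather than at the tests themselves. Below is a standalone check of that socket, with the path taken from the SocketVMnetPath value in the config above; the program is illustrative, not part of the suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this agent this would print the same refusal the tests hit.
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}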

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-527000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (30.867208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
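Note: the context "... does not exist" failures are downstream of the failed starts: the profile never got far enough for minikube to write a context entry into the kubeconfig, so every kubectl call against it fails immediately. A client-go sketch of the same lookup kubectl performs (profile name copied from the logs; the program itself is illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the way kubectl does (KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	const ctx = "default-k8s-diff-port-527000" // profile from the failing test
	if _, ok := cfg.Contexts[ctx]; !ok {
		fmt.Printf("context %q does not exist\n", ctx) // kubectl's error, verbatim
	}
}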

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-527000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-527000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-527000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.463708ms)

** stderr ** 
	error: context "default-k8s-diff-port-527000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-527000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (28.12175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-527000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (28.862083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
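Note: the "(-want +got)" block above is a go-cmp style diff: the expected image set for the Kubernetes version is compared against the output of `minikube image list --format=json`, and because the VM is stopped the got side is empty, so every expected image is reported missing. A reduced sketch of that comparison (the want list is abbreviated; the program is illustrative, not the suite's real helper):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{ // abbreviated from the expected v1.30.3 list above
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty: a stopped VM lists no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.3 images missing (-want +got):\n%s", diff)
	}
}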

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-527000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-527000 --alsologtostderr -v=1: exit status 83 (40.501416ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-527000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-527000"

-- /stdout --
** stderr ** 
	I0731 12:13:55.608931    6234 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:13:55.609142    6234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:55.609145    6234 out.go:304] Setting ErrFile to fd 2...
	I0731 12:13:55.609147    6234 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:13:55.609288    6234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:13:55.609505    6234 out.go:298] Setting JSON to false
	I0731 12:13:55.609511    6234 mustload.go:65] Loading cluster: default-k8s-diff-port-527000
	I0731 12:13:55.609703    6234 config.go:182] Loaded profile config "default-k8s-diff-port-527000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 12:13:55.613791    6234 out.go:177] * The control-plane node default-k8s-diff-port-527000 host is not running: state=Stopped
	I0731 12:13:55.617685    6234 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-527000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-527000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (28.19375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (27.726208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-527000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.180121459s)

-- stdout --
	* [newest-cni-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-139000" primary control-plane node in "newest-cni-139000" cluster
	* Restarting existing qemu2 VM for "newest-cni-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-139000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0731 12:14:05.064326    6285 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:05.064468    6285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:05.064471    6285 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:05.064474    6285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:05.064593    6285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:14:05.065634    6285 out.go:298] Setting JSON to false
	I0731 12:14:05.081728    6285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4414,"bootTime":1722448831,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 12:14:05.081789    6285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 12:14:05.086984    6285 out.go:177] * [newest-cni-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 12:14:05.094997    6285 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 12:14:05.095046    6285 notify.go:220] Checking for updates...
	I0731 12:14:05.099947    6285 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 12:14:05.102917    6285 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 12:14:05.104126    6285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 12:14:05.106923    6285 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 12:14:05.109964    6285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 12:14:05.113246    6285 config.go:182] Loaded profile config "newest-cni-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:14:05.113492    6285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 12:14:05.116875    6285 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 12:14:05.123909    6285 start.go:297] selected driver: qemu2
	I0731 12:14:05.123916    6285 start.go:901] validating driver "qemu2" against &{Name:newest-cni-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:05.123976    6285 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 12:14:05.126336    6285 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0731 12:14:05.126366    6285 cni.go:84] Creating CNI manager for ""
	I0731 12:14:05.126372    6285 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 12:14:05.126398    6285 start.go:340] cluster config:
	{Name:newest-cni-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 12:14:05.129858    6285 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 12:14:05.138893    6285 out.go:177] * Starting "newest-cni-139000" primary control-plane node in "newest-cni-139000" cluster
	I0731 12:14:05.142953    6285 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 12:14:05.142984    6285 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 12:14:05.142996    6285 cache.go:56] Caching tarball of preloaded images
	I0731 12:14:05.143050    6285 preload.go:172] Found /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0731 12:14:05.143055    6285 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 12:14:05.143121    6285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/newest-cni-139000/config.json ...
	I0731 12:14:05.143609    6285 start.go:360] acquireMachinesLock for newest-cni-139000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:14:05.143643    6285 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "newest-cni-139000"
	I0731 12:14:05.143651    6285 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:14:05.143657    6285 fix.go:54] fixHost starting: 
	I0731 12:14:05.143779    6285 fix.go:112] recreateIfNeeded on newest-cni-139000: state=Stopped err=<nil>
	W0731 12:14:05.143787    6285 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:14:05.147713    6285 out.go:177] * Restarting existing qemu2 VM for "newest-cni-139000" ...
	I0731 12:14:05.154916    6285 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:14:05.154960    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9b:d9:43:67:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:14:05.156929    6285 main.go:141] libmachine: STDOUT: 
	I0731 12:14:05.156945    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:14:05.156971    6285 fix.go:56] duration metric: took 13.316167ms for fixHost
	I0731 12:14:05.156975    6285 start.go:83] releasing machines lock for "newest-cni-139000", held for 13.328ms
	W0731 12:14:05.156981    6285 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:14:05.157008    6285 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:05.157012    6285 start.go:729] Will try again in 5 seconds ...
	I0731 12:14:10.159121    6285 start.go:360] acquireMachinesLock for newest-cni-139000: {Name:mk61f0d916b3a12d79421ffa249425b162f560b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 12:14:10.159466    6285 start.go:364] duration metric: took 271.5µs to acquireMachinesLock for "newest-cni-139000"
	I0731 12:14:10.159548    6285 start.go:96] Skipping create...Using existing machine configuration
	I0731 12:14:10.159567    6285 fix.go:54] fixHost starting: 
	I0731 12:14:10.160249    6285 fix.go:112] recreateIfNeeded on newest-cni-139000: state=Stopped err=<nil>
	W0731 12:14:10.160273    6285 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 12:14:10.168786    6285 out.go:177] * Restarting existing qemu2 VM for "newest-cni-139000" ...
	I0731 12:14:10.172747    6285 qemu.go:418] Using hvf for hardware acceleration
	I0731 12:14:10.173106    6285 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:9b:d9:43:67:ae -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19356-1202/.minikube/machines/newest-cni-139000/disk.qcow2
	I0731 12:14:10.181883    6285 main.go:141] libmachine: STDOUT: 
	I0731 12:14:10.181975    6285 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0731 12:14:10.182075    6285 fix.go:56] duration metric: took 22.506ms for fixHost
	I0731 12:14:10.182094    6285 start.go:83] releasing machines lock for "newest-cni-139000", held for 22.603791ms
	W0731 12:14:10.182245    6285 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-139000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0731 12:14:10.188765    6285 out.go:177] 
	W0731 12:14:10.192845    6285 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0731 12:14:10.192869    6285 out.go:239] * 
	* 
	W0731 12:14:10.195604    6285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 12:14:10.203642    6285 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-139000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (67.448584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
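Note: first and second starts fail with the same shape: StartHost errors, minikube waits five seconds (start.go:729), retries once, then exits 80 with GUEST_PROVISION. Below is a compressed sketch of that control flow as it appears in these logs; startHost is a stand-in for the real libmachine call, and the exit code is the one the test asserts.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// Stand-in for the real driver start; fails the way every attempt here does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // matches the asserted exit status
		}
	}
}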

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-139000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (29.062333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-139000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-139000 --alsologtostderr -v=1: exit status 83 (39.496458ms)

-- stdout --
	* The control-plane node newest-cni-139000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-139000"

-- /stdout --
** stderr ** 
	I0731 12:14:10.388241    6299 out.go:291] Setting OutFile to fd 1 ...
	I0731 12:14:10.388399    6299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:10.388403    6299 out.go:304] Setting ErrFile to fd 2...
	I0731 12:14:10.388405    6299 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 12:14:10.388550    6299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 12:14:10.388780    6299 out.go:298] Setting JSON to false
	I0731 12:14:10.388785    6299 mustload.go:65] Loading cluster: newest-cni-139000
	I0731 12:14:10.388978    6299 config.go:182] Loaded profile config "newest-cni-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0731 12:14:10.391852    6299 out.go:177] * The control-plane node newest-cni-139000 host is not running: state=Stopped
	I0731 12:14:10.395798    6299 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-139000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-139000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (28.923166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (29.452667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-139000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 6.55
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.8
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 139.11
38 TestAddons/serial/Volcano 36.95
40 TestAddons/serial/GCPAuth/Namespaces 0.08
42 TestAddons/parallel/Registry 13.92
43 TestAddons/parallel/Ingress 18.05
44 TestAddons/parallel/InspektorGadget 10.23
45 TestAddons/parallel/MetricsServer 5.26
48 TestAddons/parallel/CSI 56.54
49 TestAddons/parallel/Headlamp 10.43
50 TestAddons/parallel/CloudSpanner 5.18
51 TestAddons/parallel/LocalPath 40.8
52 TestAddons/parallel/NvidiaDevicePlugin 5.15
53 TestAddons/parallel/Yakd 10.22
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 11.32
65 TestErrorSpam/setup 34.68
66 TestErrorSpam/start 0.35
67 TestErrorSpam/status 0.24
68 TestErrorSpam/pause 0.63
69 TestErrorSpam/unpause 0.56
70 TestErrorSpam/stop 64.3
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 49.35
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 60.07
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.53
82 TestFunctional/serial/CacheCmd/cache/add_local 1.08
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
86 TestFunctional/serial/CacheCmd/cache/cache_reload 0.65
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.75
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.92
90 TestFunctional/serial/ExtraConfig 38.22
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.67
93 TestFunctional/serial/LogsFileCmd 0.59
94 TestFunctional/serial/InvalidService 4.01
96 TestFunctional/parallel/ConfigCmd 0.21
97 TestFunctional/parallel/DashboardCmd 8.52
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.11
100 TestFunctional/parallel/StatusCmd 0.25
105 TestFunctional/parallel/AddonsCmd 0.09
106 TestFunctional/parallel/PersistentVolumeClaim 25.59
108 TestFunctional/parallel/SSHCmd 0.13
109 TestFunctional/parallel/CpCmd 0.46
111 TestFunctional/parallel/FileSync 0.07
112 TestFunctional/parallel/CertSync 0.39
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.17
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.11
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.88
128 TestFunctional/parallel/ImageCommands/Setup 1.72
129 TestFunctional/parallel/DockerEnv/bash 0.27
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
133 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.47
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.18
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 3.94
161 TestFunctional/parallel/MountCmd/specific-port 1.04
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.01
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 196.47
170 TestMultiControlPlane/serial/DeployApp 4.06
171 TestMultiControlPlane/serial/PingHostFromPods 0.75
172 TestMultiControlPlane/serial/AddWorkerNode 53.46
173 TestMultiControlPlane/serial/NodeLabels 0.13
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
175 TestMultiControlPlane/serial/CopyFile 4.16
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.09
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 1.92
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 1.48
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.38
282 TestNoKubernetes/serial/Stop 3.52
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
296 TestStoppedBinaryUpgrade/MinikubeLogs 0.62
299 TestStartStop/group/old-k8s-version/serial/Stop 3.67
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
310 TestStartStop/group/no-preload/serial/Stop 3.29
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.11
323 TestStartStop/group/embed-certs/serial/Stop 3.6
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.72
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
343 TestStartStop/group/newest-cni/serial/Stop 3.27
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-382000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-382000: exit status 85 (95.76275ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-382000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |          |
	|         | -p download-only-382000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 11:14:00
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:14:00.365281    1705 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:14:00.365419    1705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:00.365423    1705 out.go:304] Setting ErrFile to fd 2...
	I0731 11:14:00.365425    1705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:00.365551    1705 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	W0731 11:14:00.365636    1705 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19356-1202/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19356-1202/.minikube/config/config.json: no such file or directory
	I0731 11:14:00.366845    1705 out.go:298] Setting JSON to true
	I0731 11:14:00.384067    1705 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":809,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:14:00.384130    1705 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:14:00.390530    1705 out.go:97] [download-only-382000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:14:00.390666    1705 notify.go:220] Checking for updates...
	W0731 11:14:00.390730    1705 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 11:14:00.394452    1705 out.go:169] MINIKUBE_LOCATION=19356
	I0731 11:14:00.397499    1705 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:14:00.402476    1705 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:14:00.405516    1705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:14:00.408443    1705 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	W0731 11:14:00.414514    1705 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:14:00.414748    1705 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:14:00.420510    1705 out.go:97] Using the qemu2 driver based on user configuration
	I0731 11:14:00.420529    1705 start.go:297] selected driver: qemu2
	I0731 11:14:00.420543    1705 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:14:00.420621    1705 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:14:00.424476    1705 out.go:169] Automatically selected the socket_vmnet network
	I0731 11:14:00.430271    1705 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 11:14:00.430401    1705 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:14:00.430466    1705 cni.go:84] Creating CNI manager for ""
	I0731 11:14:00.430482    1705 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0731 11:14:00.430540    1705 start.go:340] cluster config:
	{Name:download-only-382000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:14:00.436016    1705 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:14:00.440486    1705 out.go:97] Downloading VM boot image ...
	I0731 11:14:00.440504    1705 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0731 11:14:05.134704    1705 out.go:97] Starting "download-only-382000" primary control-plane node in "download-only-382000" cluster
	I0731 11:14:05.134729    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:05.212403    1705 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:05.212424    1705 cache.go:56] Caching tarball of preloaded images
	I0731 11:14:05.212614    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:05.217755    1705 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 11:14:05.217763    1705 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:05.293884    1705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:10.734025    1705 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:10.734343    1705 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:11.429638    1705 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0731 11:14:11.429835    1705 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-382000/config.json ...
	I0731 11:14:11.429854    1705 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-382000/config.json: {Name:mk1a7121662644079b464b6bc0c63858f6cc49b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:14:11.430087    1705 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0731 11:14:11.430289    1705 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0731 11:14:11.885864    1705 out.go:169] 
	W0731 11:14:11.891819    1705 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60 0x104789a60] Decompressors:map[bz2:0x1400081adb0 gz:0x1400081adb8 tar:0x1400081ad60 tar.bz2:0x1400081ad70 tar.gz:0x1400081ad80 tar.xz:0x1400081ad90 tar.zst:0x1400081ada0 tbz2:0x1400081ad70 tgz:0x1400081ad80 txz:0x1400081ad90 tzst:0x1400081ada0 xz:0x1400081adc0 zip:0x1400081add0 zst:0x1400081adc8] Getters:map[file:0x140007d8550 http:0x1400088c320 https:0x1400088c370] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0731 11:14:11.891844    1705 out_reason.go:110] 
	W0731 11:14:11.898859    1705 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 11:14:11.902655    1705 out.go:169] 
	
	
	* The control-plane node download-only-382000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-382000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
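
The kubectl download failure in the log above originates in hashicorp/go-getter (the getter.Client struct is visible in the W-line): the ?checksum=file:<url> query instructs go-getter to fetch the .sha256 file and verify the artifact against it, so a 404 on the checksum URL fails the whole download. A minimal sketch of the same request, assuming go-getter v1's GetFile:

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum query makes go-getter download the .sha256 file first and
	// verify the binary against it; when that URL 404s, GetFile fails with
	// "invalid checksum: Error downloading checksum file: bad response code: 404",
	// exactly as in the v1.20.0 kubectl failure above.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	if err := getter.GetFile("/tmp/kubectl.download", src); err != nil {
		log.Fatalf("download failed: %v", err)
	}
}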

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-382000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (6.55s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-754000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-754000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (6.545866708s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (6.55s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-754000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-754000: exit status 85 (78.251167ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-382000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |                     |
	|         | -p download-only-382000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| delete  | -p download-only-382000        | download-only-382000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| start   | -o=json --download-only        | download-only-754000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |                     |
	|         | -p download-only-754000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 11:14:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:14:12.316603    1733 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:14:12.316725    1733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:12.316728    1733 out.go:304] Setting ErrFile to fd 2...
	I0731 11:14:12.316730    1733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:12.316867    1733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:14:12.317912    1733 out.go:298] Setting JSON to true
	I0731 11:14:12.334011    1733 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":821,"bootTime":1722448831,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:14:12.334073    1733 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:14:12.339074    1733 out.go:97] [download-only-754000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:14:12.339176    1733 notify.go:220] Checking for updates...
	I0731 11:14:12.343083    1733 out.go:169] MINIKUBE_LOCATION=19356
	I0731 11:14:12.346089    1733 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:14:12.350102    1733 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:14:12.353045    1733 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:14:12.356085    1733 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	W0731 11:14:12.362050    1733 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:14:12.362181    1733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:14:12.364987    1733 out.go:97] Using the qemu2 driver based on user configuration
	I0731 11:14:12.364996    1733 start.go:297] selected driver: qemu2
	I0731 11:14:12.364999    1733 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:14:12.365044    1733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:14:12.367984    1733 out.go:169] Automatically selected the socket_vmnet network
	I0731 11:14:12.373310    1733 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 11:14:12.373429    1733 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:14:12.373446    1733 cni.go:84] Creating CNI manager for ""
	I0731 11:14:12.373452    1733 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:14:12.373457    1733 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:14:12.373506    1733 start.go:340] cluster config:
	{Name:download-only-754000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-754000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:14:12.377088    1733 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:14:12.380134    1733 out.go:97] Starting "download-only-754000" primary control-plane node in "download-only-754000" cluster
	I0731 11:14:12.380141    1733 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:14:12.436942    1733 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:14:12.436966    1733 cache.go:56] Caching tarball of preloaded images
	I0731 11:14:12.437123    1733 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0731 11:14:12.442210    1733 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 11:14:12.442218    1733 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:12.521242    1733 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0731 11:14:17.033774    1733 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:17.034073    1733 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-754000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-754000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.8s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-687000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-687000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (6.798619333s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.80s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-687000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-687000: exit status 85 (72.321208ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-382000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |                     |
	|         | -p download-only-382000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| delete  | -p download-only-382000             | download-only-382000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| start   | -o=json --download-only             | download-only-754000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |                     |
	|         | -p download-only-754000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| delete  | -p download-only-754000             | download-only-754000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT | 31 Jul 24 11:14 PDT |
	| start   | -o=json --download-only             | download-only-687000 | jenkins | v1.33.1 | 31 Jul 24 11:14 PDT |                     |
	|         | -p download-only-687000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 11:14:19
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 11:14:19.150713    1757 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:14:19.150839    1757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:19.150842    1757 out.go:304] Setting ErrFile to fd 2...
	I0731 11:14:19.150845    1757 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:14:19.150972    1757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:14:19.152039    1757 out.go:298] Setting JSON to true
	I0731 11:14:19.168214    1757 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":828,"bootTime":1722448831,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:14:19.168272    1757 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:14:19.172817    1757 out.go:97] [download-only-687000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:14:19.172892    1757 notify.go:220] Checking for updates...
	I0731 11:14:19.175678    1757 out.go:169] MINIKUBE_LOCATION=19356
	I0731 11:14:19.179792    1757 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:14:19.183801    1757 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:14:19.186808    1757 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:14:19.189776    1757 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	W0731 11:14:19.195762    1757 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 11:14:19.195905    1757 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:14:19.198706    1757 out.go:97] Using the qemu2 driver based on user configuration
	I0731 11:14:19.198714    1757 start.go:297] selected driver: qemu2
	I0731 11:14:19.198718    1757 start.go:901] validating driver "qemu2" against <nil>
	I0731 11:14:19.198762    1757 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 11:14:19.201725    1757 out.go:169] Automatically selected the socket_vmnet network
	I0731 11:14:19.205216    1757 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0731 11:14:19.205314    1757 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 11:14:19.205358    1757 cni.go:84] Creating CNI manager for ""
	I0731 11:14:19.205366    1757 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0731 11:14:19.205373    1757 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 11:14:19.205429    1757 start.go:340] cluster config:
	{Name:download-only-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-687000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:14:19.208862    1757 iso.go:125] acquiring lock: {Name:mk7f09037d54cdd1c4452a219996965da3d4677d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 11:14:19.211718    1757 out.go:97] Starting "download-only-687000" primary control-plane node in "download-only-687000" cluster
	I0731 11:14:19.211724    1757 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 11:14:19.284009    1757 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:19.284031    1757 cache.go:56] Caching tarball of preloaded images
	I0731 11:14:19.284256    1757 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 11:14:19.289502    1757 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 11:14:19.289510    1757 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:19.371272    1757 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0731 11:14:23.521748    1757 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:23.521926    1757 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0731 11:14:24.041401    1757 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0731 11:14:24.041625    1757 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-687000/config.json ...
	I0731 11:14:24.041642    1757 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/download-only-687000/config.json: {Name:mkcf782e1db4e821b68099f9009fee760fb71715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 11:14:24.041918    1757 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0731 11:14:24.042047    1757 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19356-1202/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-687000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-687000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-687000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-256000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-256000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-256000
--- PASS: TestBinaryMirror (0.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-241000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-241000: exit status 85 (54.743958ms)

-- stdout --
	* Profile "addons-241000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-241000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-241000: exit status 85 (58.531042ms)

-- stdout --
	* Profile "addons-241000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-241000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (139.11s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-241000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-241000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m19.10683225s)
--- PASS: TestAddons/Setup (139.11s)

TestAddons/serial/Volcano (36.95s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 6.433167ms
addons_test.go:905: volcano-admission stabilized in 6.450458ms
addons_test.go:913: volcano-controller stabilized in 6.534083ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-kccrz" [37b62525-ca62-4af1-af84-a07098b48593] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00405825s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-5t24q" [11ffd420-6ca3-463b-8d60-b51102d86400] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004114s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-cxfjg" [4f3812dd-72d4-4a8b-b656-d3aaaf850c41] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003640833s
addons_test.go:932: (dbg) Run:  kubectl --context addons-241000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-241000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-241000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [9f52f265-f7da-48de-a813-7a6738be1339] Pending
helpers_test.go:344: "test-job-nginx-0" [9f52f265-f7da-48de-a813-7a6738be1339] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [9f52f265-f7da-48de-a813-7a6738be1339] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00492375s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable volcano --alsologtostderr -v=1: (9.712986833s)
--- PASS: TestAddons/serial/Volcano (36.95s)
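
A by-hand sketch of the same check, assuming the volcano addon is enabled on a profile named addons-241000 (the vcjob manifest lives in the suite's testdata and is not reproduced here; kubectl wait stands in for the test's own polling helper):

    # Inspect the Volcano job created from testdata/vcjob.yaml
    kubectl --context addons-241000 get vcjob -n my-volcano
    # Wait for the job's pod (the test polls pods labeled volcano.sh/job-name=test-job)
    kubectl --context addons-241000 wait --for=condition=Ready pod \
        -l volcano.sh/job-name=test-job -n my-volcano --timeout=180s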

TestAddons/serial/GCPAuth/Namespaces (0.08s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-241000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-241000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (13.92s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.173209ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-9w9qt" [ecc3900e-55fb-4343-826d-9bf594fb611e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004159s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-hh9xv" [c277381e-4b8e-4e8a-a757-2179d6fe116b] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004415292s
addons_test.go:342: (dbg) Run:  kubectl --context addons-241000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-241000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-241000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.634284375s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 ip
2024/07/31 11:17:53 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.92s)
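
The reachability check above can be replayed by hand with the same commands the test runs, substituting an installed minikube for the tree's out/minikube-darwin-arm64 binary:

    # In-cluster: the registry service must answer over cluster DNS
    kubectl --context addons-241000 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host side: fetch the node IP, which the test then probes on port 5000
    minikube -p addons-241000 ip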

TestAddons/parallel/Ingress (18.05s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-241000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-241000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-241000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2ee5e434-2b5c-418d-9e95-295d9aed6b15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2ee5e434-2b5c-418d-9e95-295d9aed6b15] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004113625s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-241000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable ingress-dns --alsologtostderr -v=1: (1.279655833s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable ingress --alsologtostderr -v=1: (7.205019667s)
--- PASS: TestAddons/parallel/Ingress (18.05s)
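
Both probes above are plain commands and can be replayed directly; a sketch, assuming an installed minikube:

    # Ingress path: curl from inside the VM with the Host header the rule expects
    minikube -p addons-241000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns path: resolve the test hostname against the cluster's IP
    nslookup hello-john.test $(minikube -p addons-241000 ip)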

TestAddons/parallel/InspektorGadget (10.23s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-l5hj5" [f29ef6d7-2dec-4e53-9d0c-98c61630b43c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004290458s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-241000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-241000: (5.228186s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

TestAddons/parallel/MetricsServer (5.26s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.436875ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-v57pv" [1170121e-1497-4221-bb13-fcefdad9dd8f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004091542s
addons_test.go:417: (dbg) Run:  kubectl --context addons-241000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.26s)

TestAddons/parallel/CSI (56.54s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.621167ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a66211e6-9c35-41d0-85bf-e7d534cb1a30] Pending
helpers_test.go:344: "task-pv-pod" [a66211e6-9c35-41d0-85bf-e7d534cb1a30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a66211e6-9c35-41d0-85bf-e7d534cb1a30] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004165291s
addons_test.go:590: (dbg) Run:  kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-241000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-241000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-241000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-241000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ac285933-7fb5-4c97-89e3-ebab9a21f47c] Pending
helpers_test.go:344: "task-pv-pod-restore" [ac285933-7fb5-4c97-89e3-ebab9a21f47c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ac285933-7fb5-4c97-89e3-ebab9a21f47c] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003508833s
addons_test.go:632: (dbg) Run:  kubectl --context addons-241000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-241000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-241000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.081873833s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.54s)
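
Condensed, the snapshot/restore round-trip this test drives (readiness polling omitted; the manifests are the suite's testdata files):

    kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pvc.yaml       # claim
    kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pv-pod.yaml    # writer pod
    kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/snapshot.yaml  # snapshot the volume
    kubectl --context addons-241000 delete pod task-pv-pod
    kubectl --context addons-241000 delete pvc hpvc
    kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # claim from snapshot
    kubectl --context addons-241000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # reader pod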

TestAddons/parallel/Headlamp (10.43s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-241000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-psw8r" [343b19cb-7143-45d2-85be-d0361a199680] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-psw8r" [343b19cb-7143-45d2-85be-d0361a199680] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00403425s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (10.43s)

TestAddons/parallel/CloudSpanner (5.18s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-jqc9f" [67af21eb-284a-49e2-9cc3-e1b654eb9d66] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003969667s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-241000
--- PASS: TestAddons/parallel/CloudSpanner (5.18s)

TestAddons/parallel/LocalPath (40.8s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-241000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-241000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [89be4243-4f5d-4766-9723-f723f00f45ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [89be4243-4f5d-4766-9723-f723f00f45ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [89be4243-4f5d-4766-9723-f723f00f45ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003916625s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-241000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 ssh "cat /opt/local-path-provisioner/pvc-dab5f49f-a897-47de-8700-6eb6bfc67b58_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-241000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-241000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.342586625s)
--- PASS: TestAddons/parallel/LocalPath (40.80s)
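
The ssh check above can be replayed by hand, but the pvc-... directory name is generated per run, so it has to be read back from the claim first. A sketch, with <pv-name> standing in for the bound volume's name:

    # Find the bound PV name (pvc-<uid>), then read the file the pod wrote
    kubectl --context addons-241000 get pvc test-pvc -o jsonpath={.spec.volumeName}
    minikube -p addons-241000 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"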

TestAddons/parallel/NvidiaDevicePlugin (5.15s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gqtwt" [07744792-b67b-47c0-85b3-391b54ed9322] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004096666s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-241000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.22s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6d9n2" [f2947458-009b-4dc4-8a8a-0254373ac0b4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004478917s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-241000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-241000 addons disable yakd --alsologtostderr -v=1: (5.21271225s)
--- PASS: TestAddons/parallel/Yakd (10.22s)

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-241000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-241000: (12.205322208s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-241000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-241000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-241000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (11.32s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.32s)

TestErrorSpam/setup (34.68s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-346000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-346000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 --driver=qemu2 : (34.678267459s)
--- PASS: TestErrorSpam/setup (34.68s)

TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.24s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.63s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.56s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 unpause
--- PASS: TestErrorSpam/unpause (0.56s)

TestErrorSpam/stop (64.3s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop: (12.197030959s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop: (26.065340125s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-346000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-346000 stop: (26.033763792s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19356-1202/.minikube/files/etc/test/nested/copy/1701/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.35s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0731 11:21:45.987285    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:45.994129    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.006187    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.026567    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.068660    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.150763    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.312872    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:46.634955    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:47.277082    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:48.559182    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:21:51.121264    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-080000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.345249625s)
--- PASS: TestFunctional/serial/StartWithProxy (49.35s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (60.07s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --alsologtostderr -v=8
E0731 11:21:56.243355    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:22:06.485438    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:22:26.966427    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-080000 --alsologtostderr -v=8: (1m0.0662535s)
functional_test.go:659: soft start took 1m0.066645625s for "functional-080000" cluster.
--- PASS: TestFunctional/serial/SoftStart (60.07s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-080000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3959460160/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache add minikube-local-cache-test:functional-080000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache delete minikube-local-cache-test:functional-080000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-080000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.192458ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.65s)
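
The same round-trip by hand, assuming an installed minikube against the functional-080000 profile:

    minikube -p functional-080000 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-080000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p functional-080000 cache reload                                            # repush cached images
    minikube -p functional-080000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again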

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.75s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 kubectl -- --context functional-080000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-080000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.92s)

TestFunctional/serial/ExtraConfig (38.22s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 11:23:07.927048    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-080000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.224643s)
functional_test.go:757: restart took 38.224751875s for "functional-080000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.22s)
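
The flag under test takes component.key=value pairs and is applied by restarting the existing profile, as in the command the test runs:

    minikube start -p functional-080000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all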

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-080000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.67s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.67s)

TestFunctional/serial/LogsFileCmd (0.59s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd236974008/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)

TestFunctional/serial/InvalidService (4.01s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-080000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-080000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-080000: exit status 115 (102.711584ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32191 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-080000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
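
Replaying this by hand: a service whose pods can never run should make minikube service exit with status 115 (SVC_UNREACHABLE) rather than print a dead URL:

    kubectl --context functional-080000 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-080000    # expected: exit status 115
    kubectl --context functional-080000 delete -f testdata/invalidsvc.yaml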

TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 config get cpus: exit status 14 (29.564833ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 config get cpus: exit status 14 (28.338083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)
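
Exit status 14 is the signal this test expects for reading an unset key; the full round-trip it performs, by hand:

    minikube -p functional-080000 config get cpus     # exit 14 while the key is unset
    minikube -p functional-080000 config set cpus 2
    minikube -p functional-080000 config get cpus     # prints 2
    minikube -p functional-080000 config unset cpus
    minikube -p functional-080000 config get cpus     # exit 14 again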

TestFunctional/parallel/DashboardCmd (8.52s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-080000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-080000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.52s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-080000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.46275ms)

-- stdout --
	* [functional-080000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 11:24:30.775298    2564 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:24:30.775431    2564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.775435    2564 out.go:304] Setting ErrFile to fd 2...
	I0731 11:24:30.775437    2564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.775558    2564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:24:30.776510    2564 out.go:298] Setting JSON to false
	I0731 11:24:30.794007    2564 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1439,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:24:30.794080    2564 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:24:30.800173    2564 out.go:177] * [functional-080000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0731 11:24:30.807135    2564 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:24:30.807231    2564 notify.go:220] Checking for updates...
	I0731 11:24:30.815190    2564 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:24:30.818110    2564 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:24:30.821146    2564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:24:30.824207    2564 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:24:30.827099    2564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:24:30.830444    2564 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:24:30.830715    2564 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:24:30.835150    2564 out.go:177] * Using the qemu2 driver based on existing profile
	I0731 11:24:30.842071    2564 start.go:297] selected driver: qemu2
	I0731 11:24:30.842077    2564 start.go:901] validating driver "qemu2" against &{Name:functional-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:24:30.842131    2564 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:24:30.849215    2564 out.go:177] 
	W0731 11:24:30.853153    2564 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 11:24:30.856199    2564 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
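Note: both dry-run invocations hit the same resource gate: a 250MB request is rejected against the 1800MB usable minimum with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before any VM work starts. A sketch of that check as the log describes it; the function name and error wording mirror the log but are not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
)

// minUsableMB is the floor quoted in the log message above.
const minUsableMB = 1800

// validateRequestedMemory mirrors the dry-run gate: reject the request
// before any VM is created. Exit code 23 matches the observed run.
func validateRequestedMemory(requestedMiB int) error {
	if requestedMiB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMiB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23)
	}
}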

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-080000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-080000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.983583ms)

-- stdout --
	* [functional-080000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 11:24:30.999091    2575 out.go:291] Setting OutFile to fd 1 ...
	I0731 11:24:30.999198    2575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.999206    2575 out.go:304] Setting ErrFile to fd 2...
	I0731 11:24:30.999208    2575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 11:24:30.999338    2575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
	I0731 11:24:31.000715    2575 out.go:298] Setting JSON to false
	I0731 11:24:31.017985    2575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1440,"bootTime":1722448831,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0731 11:24:31.018091    2575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0731 11:24:31.022180    2575 out.go:177] * [functional-080000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0731 11:24:31.030151    2575 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 11:24:31.030235    2575 notify.go:220] Checking for updates...
	I0731 11:24:31.038152    2575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	I0731 11:24:31.041142    2575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0731 11:24:31.042502    2575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 11:24:31.045171    2575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	I0731 11:24:31.048161    2575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 11:24:31.051473    2575 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0731 11:24:31.051719    2575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 11:24:31.056153    2575 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0731 11:24:31.063154    2575 start.go:297] selected driver: qemu2
	I0731 11:24:31.063162    2575 start.go:901] validating driver "qemu2" against &{Name:functional-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-080000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 11:24:31.063210    2575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 11:24:31.069098    2575 out.go:177] 
	W0731 11:24:31.073208    2575 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 11:24:31.077081    2575 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
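Note: the three status calls show the same data as a table, a Go template (the "kublet" key name is verbatim from the test source), and JSON. A sketch of consuming the JSON form; the struct lists only the four fields the template references and is an assumption about the full payload:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus holds the fields referenced by the template in the log;
// the real `minikube status -o json` payload carries more than these.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-080000", "status", "-o", "json").Output()
	if err != nil {
		panic(err) // status exits non-zero when the cluster is not running
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}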

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/PersistentVolumeClaim (25.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9256a135-6062-49cd-8f4e-9b50eea2c57f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003511917s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-080000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-080000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-080000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-080000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92a6dd0a-ca61-42e9-9471-f79dd6a1276f] Pending
helpers_test.go:344: "sp-pod" [92a6dd0a-ca61-42e9-9471-f79dd6a1276f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92a6dd0a-ca61-42e9-9471-f79dd6a1276f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003622792s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-080000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-080000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-080000 delete -f testdata/storage-provisioner/pod.yaml: (1.188866s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-080000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c1813d9e-2da1-4b47-a806-2dca1c846fed] Pending
helpers_test.go:344: "sp-pod" [c1813d9e-2da1-4b47-a806-2dca1c846fed] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c1813d9e-2da1-4b47-a806-2dca1c846fed] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00427425s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-080000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.59s)
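Note: the flow above is claim, mount, write, delete the pod, recreate it, and confirm /tmp/mount/foo survived, with the harness polling label-matched pods until they are Running. A rough standalone equivalent of that polling using kubectl's jsonpath output, assuming the same context and labels:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls pods matching a label selector until every matched
// pod reports phase Running, or the deadline passes.
func waitRunning(context, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		if err == nil && len(phases) > 0 {
			running := true
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not Running within %v", selector, timeout)
}

func main() {
	fmt.Println(waitRunning("functional-080000", "test=storage-provisioner", 3*time.Minute))
}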

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh -n functional-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cp functional-080000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3350949811/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh -n functional-080000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh -n functional-080000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.46s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1701/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /etc/test/nested/copy/1701/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1701.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /etc/ssl/certs/1701.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1701.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /usr/share/ca-certificates/1701.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /etc/ssl/certs/17012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /usr/share/ca-certificates/17012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-080000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh "sudo systemctl is-active crio": exit status 1 (63.409208ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)
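Note: exit status 3 here is systemctl is-active reporting an inactive unit (systemd follows the LSB status-code convention), which is what the test wants: with Docker as the active runtime, crio must be off. A sketch of mapping that exit status, run locally rather than through the harness's SSH wrapper:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runtimeDisabled reports whether a unit is NOT active. `systemctl
// is-active` exits 0 when the unit is active and non-zero (typically
// 3, per the LSB convention systemd follows) when it is inactive.
func runtimeDisabled(unit string) (bool, error) {
	err := exec.Command("systemctl", "is-active", unit).Run()
	if err == nil {
		return false, nil // unit is active
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return true, nil // non-zero exit: inactive, as in the log
	}
	return false, err // systemctl itself could not run
}

func main() {
	off, err := runtimeDisabled("crio")
	fmt.Println(off, err)
}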

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-080000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-080000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-080000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-080000 image ls --format short --alsologtostderr:
I0731 11:24:36.491375    2608 out.go:291] Setting OutFile to fd 1 ...
I0731 11:24:36.491569    2608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.491573    2608 out.go:304] Setting ErrFile to fd 2...
I0731 11:24:36.491575    2608 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.491711    2608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:24:36.492139    2608 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.492202    2608 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.493051    2608 ssh_runner.go:195] Run: systemctl --version
I0731 11:24:36.493059    2608 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/functional-080000/id_rsa Username:docker}
I0731 11:24:36.518006    2608 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-080000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-080000 | 513b582c1d65a | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| docker.io/kicbase/echo-server               | functional-080000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | latest            | 43b17fe33c4b4 | 193MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | d7cd33d7d4ed1 | 44.8MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-080000 | 2a66b8512b6d2 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-080000 image ls --format table --alsologtostderr:
I0731 11:24:38.630032    2621 out.go:291] Setting OutFile to fd 1 ...
I0731 11:24:38.630207    2621 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:38.630210    2621 out.go:304] Setting ErrFile to fd 2...
I0731 11:24:38.630213    2621 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:38.630341    2621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:24:38.630780    2621 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:38.630841    2621 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:38.631683    2621 ssh_runner.go:195] Run: systemctl --version
I0731 11:24:38.631691    2621 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/functional-080000/id_rsa Username:docker}
I0731 11:24:38.656645    2621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/07/31 11:24:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-080000 image ls --format json --alsologtostderr:
[{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-080000"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3
f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"513b582c1d65ae94867ced4bf386019bc45cf8f4dba74360199d2a9243c255ee","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-080000"],"size":"1410000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2a66b8512b6d2265733652aaf3ddd0a19ef091575fbbe792891310342c4b8c8c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test
:functional-080000"],"size":"30"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size"
:"3550000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-080000 image ls --format json --alsologtostderr:
I0731 11:24:38.560548    2619 out.go:291] Setting OutFile to fd 1 ...
I0731 11:24:38.560717    2619 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:38.560721    2619 out.go:304] Setting ErrFile to fd 2...
I0731 11:24:38.560723    2619 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:38.560883    2619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:24:38.561392    2619 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:38.561453    2619 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:38.562387    2619 ssh_runner.go:195] Run: systemctl --version
I0731 11:24:38.562396    2619 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/functional-080000/id_rsa Username:docker}
I0731 11:24:38.586919    2619 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-080000 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-080000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 2a66b8512b6d2265733652aaf3ddd0a19ef091575fbbe792891310342c4b8c8c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-080000
size: "30"
- id: d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-080000 image ls --format yaml --alsologtostderr:
I0731 11:24:36.568476    2610 out.go:291] Setting OutFile to fd 1 ...
I0731 11:24:36.568629    2610 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.568632    2610 out.go:304] Setting ErrFile to fd 2...
I0731 11:24:36.568634    2610 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.568760    2610 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:24:36.569169    2610 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.569231    2610 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.570025    2610 ssh_runner.go:195] Run: systemctl --version
I0731 11:24:36.570034    2610 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/functional-080000/id_rsa Username:docker}
I0731 11:24:36.596631    2610 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.11s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh pgrep buildkitd: exit status 1 (59.269416ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image build -t localhost/my-image:functional-080000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-080000 image build -t localhost/my-image:functional-080000 testdata/build --alsologtostderr: (1.751278459s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-080000 image build -t localhost/my-image:functional-080000 testdata/build --alsologtostderr:
I0731 11:24:36.736693    2614 out.go:291] Setting OutFile to fd 1 ...
I0731 11:24:36.736967    2614 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.736974    2614 out.go:304] Setting ErrFile to fd 2...
I0731 11:24:36.736977    2614 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 11:24:36.737130    2614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19356-1202/.minikube/bin
I0731 11:24:36.737611    2614 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.738414    2614 config.go:182] Loaded profile config "functional-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0731 11:24:36.739389    2614 ssh_runner.go:195] Run: systemctl --version
I0731 11:24:36.739403    2614 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19356-1202/.minikube/machines/functional-080000/id_rsa Username:docker}
I0731 11:24:36.765517    2614 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.651044347.tar
I0731 11:24:36.765590    2614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 11:24:36.769078    2614 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.651044347.tar
I0731 11:24:36.770432    2614 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.651044347.tar: stat -c "%s %y" /var/lib/minikube/build/build.651044347.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.651044347.tar': No such file or directory
I0731 11:24:36.770444    2614 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.651044347.tar --> /var/lib/minikube/build/build.651044347.tar (3072 bytes)
I0731 11:24:36.786411    2614 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.651044347
I0731 11:24:36.790652    2614 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.651044347 -xf /var/lib/minikube/build/build.651044347.tar
I0731 11:24:36.795015    2614 docker.go:360] Building image: /var/lib/minikube/build/build.651044347
I0731 11:24:36.795087    2614 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-080000 /var/lib/minikube/build/build.651044347
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.2s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:513b582c1d65ae94867ced4bf386019bc45cf8f4dba74360199d2a9243c255ee done
#8 naming to localhost/my-image:functional-080000 done
#8 DONE 0.0s
I0731 11:24:38.434210    2614 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-080000 /var/lib/minikube/build/build.651044347: (1.639137916s)
I0731 11:24:38.434274    2614 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.651044347
I0731 11:24:38.438092    2614 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.651044347.tar
I0731 11:24:38.441394    2614 build_images.go:217] Built localhost/my-image:functional-080000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.651044347.tar
I0731 11:24:38.441411    2614 build_images.go:133] succeeded building to: functional-080000
I0731 11:24:38.441414    2614 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)
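Note: per the BuildKit stages above, the build context is a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) that is tarred, copied into the VM, and built there. A sketch of driving the same flow by hand; the profile, tag, and context path are taken from this run, but the program itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-080000" // assumed profile from this run

	// Build an image inside the minikube VM from a local directory,
	// mirroring `image build -t ... testdata/build` in the log.
	build := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("build failed: %v\n%s", err, out))
	}

	// Confirm the tag shows up in the VM's image list.
	out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("built:", strings.Contains(string(out), "localhost/my-image:"+profile))
}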

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.70659875s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-080000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/DockerEnv/bash (0.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-080000 docker-env) && out/minikube-darwin-arm64 status -p functional-080000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-080000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-080000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-080000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-mgcts" [f651bfb2-fe9f-4d10-9086-f6ae6692d12c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-mgcts" [f651bfb2-fe9f-4d10-9086-f6ae6692d12c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004209333s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
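
Note: the DeployApp block follows the standard deploy-and-expose pattern. A hedged sketch using the functional-080000 context from the log; the final watch command is an illustrative stand-in for the test's polling helper, not part of the test itself:

	# Create a deployment, expose it on a NodePort, then watch the pod come up.
	kubectl --context functional-080000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-080000 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-080000 get pods -l app=hello-node --watch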

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image load --daemon docker.io/kicbase/echo-server:functional-080000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image load --daemon docker.io/kicbase/echo-server:functional-080000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-080000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image load --daemon docker.io/kicbase/echo-server:functional-080000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image save docker.io/kicbase/echo-server:functional-080000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.18s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image rm docker.io/kicbase/echo-server:functional-080000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-080000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 image save --daemon docker.io/kicbase/echo-server:functional-080000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-080000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
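
Note: taken together, the ImageCommands blocks above cover a full image round trip. A minimal sketch under the same profile; /tmp/echo.tar is a hypothetical path standing in for the workspace tarball used by the tests:

	# Save an image from the cluster runtime to a tarball, remove it,
	# load it back from the tarball, and list images to verify.
	out/minikube-darwin-arm64 -p functional-080000 image save docker.io/kicbase/echo-server:functional-080000 /tmp/echo.tar
	out/minikube-darwin-arm64 -p functional-080000 image rm docker.io/kicbase/echo-server:functional-080000
	out/minikube-darwin-arm64 -p functional-080000 image load /tmp/echo.tar
	out/minikube-darwin-arm64 -p functional-080000 image ls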

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2430: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-080000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [35af5520-99cd-4b3c-80a5-ba5d69fe5abc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [35af5520-99cd-4b3c-80a5-ba5d69fe5abc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00351275s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/ServiceCmd/List (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service list -o json
functional_test.go:1490: Took "82.905958ms" to run "out/minikube-darwin-arm64 -p functional-080000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30269
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30269
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
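
Note: the ServiceCmd blocks above resolve the same NodePort endpoint several ways. A sketch of the discovery flow, assuming the hello-node service created in DeployApp; the curl line is an illustrative check, not part of the test:

	# List services, resolve the URL for one of them, then probe it.
	out/minikube-darwin-arm64 -p functional-080000 service list -o json
	URL=$(out/minikube-darwin-arm64 -p functional-080000 service hello-node --url)
	curl "$URL"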

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-080000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.212.43 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
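
Note: the TunnelCmd serial blocks above amount to one workflow: run a tunnel, wait for a LoadBalancer ingress IP, then reach the service by IP and by cluster DNS. A hedged sketch assembled from the commands in the log:

	# The tunnel must stay running, so background it for the duration of the checks.
	out/minikube-darwin-arm64 -p functional-080000 tunnel --alsologtostderr &
	# Read the ingress IP assigned to the LoadBalancer service.
	kubectl --context functional-080000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	# Resolve the service name against the cluster DNS server (10.96.0.10 in this run).
	dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A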

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "84.901125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "34.76025ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "84.0975ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "33.6535ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
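
Note: the ProfileCmd timings above compare the two listing modes. As a rough guide (the --light behavior is stated in minikube's flag help, not in this log), --light skips validating each profile's status, which is why it returns in less than half the time here:

	out/minikube-darwin-arm64 profile list -o json          # probes each profile's status
	out/minikube-darwin-arm64 profile list -o json --light  # skips status probing; faster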

TestFunctional/parallel/MountCmd/any-port (3.94s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722450264749385000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722450264749385000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722450264749385000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001/test-1722450264749385000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.996792ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 18:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 18:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 18:24 test-1722450264749385000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh cat /mount-9p/test-1722450264749385000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-080000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [43e249a6-ac8b-4625-85ac-9ae54bcdeb04] Pending
helpers_test.go:344: "busybox-mount" [43e249a6-ac8b-4625-85ac-9ae54bcdeb04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [43e249a6-ac8b-4625-85ac-9ae54bcdeb04] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [43e249a6-ac8b-4625-85ac-9ae54bcdeb04] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003715708s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-080000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1337840899/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (3.94s)
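
Note: MountCmd/any-port exercises the 9p host mount end to end. A minimal sketch, where HOST_DIR is a hypothetical host directory standing in for the per-test temp dir; as seen above, the first findmnt may fail once while the mount is still coming up:

	# Start the mount daemon in the background, verify the 9p mount from inside
	# the guest, then unmount.
	out/minikube-darwin-arm64 mount -p functional-080000 "$HOST_DIR":/mount-9p &
	out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-darwin-arm64 -p functional-080000 ssh "sudo umount -f /mount-9p"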

TestFunctional/parallel/MountCmd/specific-port (1.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3719411143/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (58.757ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3719411143/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh "sudo umount -f /mount-9p": exit status 1 (60.052292ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-080000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3719411143/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.04s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.01s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T" /mount1: exit status 1 (66.546875ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0731 11:24:29.846227    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-080000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-080000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-080000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2550289574/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.01s)
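
Note: VerifyCleanup relies on the kill switch shown at functional_test_mount_test.go:370: passing --kill=true tears down every mount daemon for the profile at once, which is why the three per-mount stop attempts afterwards find no parent process. The cleanup command from the log:

	# One command cleans up all outstanding mounts for the profile.
	out/minikube-darwin-arm64 mount -p functional-080000 --kill=true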

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-080000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-080000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-080000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (196.47s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0731 11:26:45.980590    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
E0731 11:27:13.685635    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/addons-241000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m16.281091417s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.47s)
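
Note: StartCluster brings up a multi-control-plane topology in one command; the --ha flag is what requests the extra control planes. The invocation from the log, repeated here as the reusable pattern:

	# Start an HA cluster under qemu2, then confirm every node reports healthy.
	out/minikube-darwin-arm64 start -p ha-688000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
	out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr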

TestMultiControlPlane/serial/DeployApp (4.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-688000 -- rollout status deployment/busybox: (2.568675042s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-82sw2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-rtf9l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-v9k8q -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-82sw2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-rtf9l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-v9k8q -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-82sw2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-rtf9l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-v9k8q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.06s)

TestMultiControlPlane/serial/PingHostFromPods (0.75s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-82sw2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-82sw2 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-rtf9l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-rtf9l -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-v9k8q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-688000 -- exec busybox-fc5497c4f-v9k8q -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)

TestMultiControlPlane/serial/AddWorkerNode (53.46s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr
E0731 11:28:47.079073    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.085698    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.097780    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.119844    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.161909    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.244004    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.405149    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:47.727245    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:48.367407    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:49.649529    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:28:52.211663    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-688000 -v=7 --alsologtostderr: (53.243644625s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.46s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.16s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp testdata/cp-test.txt ha-688000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile139678013/001/cp-test_ha-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000:/home/docker/cp-test.txt ha-688000-m02:/home/docker/cp-test_ha-688000_ha-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test_ha-688000_ha-688000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000:/home/docker/cp-test.txt ha-688000-m03:/home/docker/cp-test_ha-688000_ha-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test_ha-688000_ha-688000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000:/home/docker/cp-test.txt ha-688000-m04:/home/docker/cp-test_ha-688000_ha-688000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test_ha-688000_ha-688000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp testdata/cp-test.txt ha-688000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile139678013/001/cp-test_ha-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m02:/home/docker/cp-test.txt ha-688000:/home/docker/cp-test_ha-688000-m02_ha-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test_ha-688000-m02_ha-688000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m02:/home/docker/cp-test.txt ha-688000-m03:/home/docker/cp-test_ha-688000-m02_ha-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test_ha-688000-m02_ha-688000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m02:/home/docker/cp-test.txt ha-688000-m04:/home/docker/cp-test_ha-688000-m02_ha-688000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test_ha-688000-m02_ha-688000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp testdata/cp-test.txt ha-688000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile139678013/001/cp-test_ha-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m03:/home/docker/cp-test.txt ha-688000:/home/docker/cp-test_ha-688000-m03_ha-688000.txt
E0731 11:28:57.333561    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test_ha-688000-m03_ha-688000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m03:/home/docker/cp-test.txt ha-688000-m02:/home/docker/cp-test_ha-688000-m03_ha-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test_ha-688000-m03_ha-688000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m03:/home/docker/cp-test.txt ha-688000-m04:/home/docker/cp-test_ha-688000-m03_ha-688000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test_ha-688000-m03_ha-688000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp testdata/cp-test.txt ha-688000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile139678013/001/cp-test_ha-688000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m04:/home/docker/cp-test.txt ha-688000:/home/docker/cp-test_ha-688000-m04_ha-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000 "sudo cat /home/docker/cp-test_ha-688000-m04_ha-688000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m04:/home/docker/cp-test.txt ha-688000-m02:/home/docker/cp-test_ha-688000-m04_ha-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m02 "sudo cat /home/docker/cp-test_ha-688000-m04_ha-688000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m04:/home/docker/cp-test.txt ha-688000-m03:/home/docker/cp-test_ha-688000-m04_ha-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m03 "sudo cat /home/docker/cp-test_ha-688000-m04_ha-688000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.16s)
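
Note: the CopyFile block runs the same two-step check for every (source, destination) pair in the four-node matrix. One representative pair, taken verbatim from the log:

	# Copy a file from node m03 to node m04, then read it back over ssh to verify.
	out/minikube-darwin-arm64 -p ha-688000 cp ha-688000-m03:/home/docker/cp-test.txt ha-688000-m04:/home/docker/cp-test_ha-688000-m03_ha-688000-m04.txt
	out/minikube-darwin-arm64 -p ha-688000 ssh -n ha-688000-m04 "sudo cat /home/docker/cp-test_ha-688000-m03_ha-688000-m04.txt"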

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0731 11:43:47.060199    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
E0731 11:45:10.125490    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.093806125s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-011000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-011000 --output=json --user=testUser: (1.922832541s)
--- PASS: TestJSONOutput/stop/Command (1.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-537000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-537000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.8815ms)
-- stdout --
	{"specversion":"1.0","id":"728e24ea-015a-4d34-bdca-7b7ef77bc16d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-537000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19501bba-5387-4af8-8bb2-ad5df1270e39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19356"}}
	{"specversion":"1.0","id":"ac559dcb-b20b-4b4b-9987-eade9164627f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig"}}
	{"specversion":"1.0","id":"1817ad0b-ce28-4227-be8d-38c0945de9e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"41b353e2-62f3-48a4-b528-fbdb98449d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5bf83ca-c203-4600-ab5e-2fcac634a773","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube"}}
	{"specversion":"1.0","id":"39ee0b5e-ea50-4a53-bf02-acf31d061b76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4e4f66b6-068b-4516-a99d-68e39c8830c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-537000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-537000
--- PASS: TestErrorJSONOutput (0.21s)
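
Note: each stdout line above is a self-contained CloudEvents-style JSON object (specversion, id, source, type, data). A minimal decoder sketch for that stream, in Go; the Event shape below is inferred from this log's output, not taken from minikube's source:

    // Sketch: decode minikube's --output=json event stream line by line.
    // The Event shape is inferred from the log above, not from minikube's code.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type Event struct {
        Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
        Data map[string]string `json:"data"` // message, currentstep, exitcode, ...
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev Event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // ignore any non-JSON lines
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Fed the stdout above, this would print: error 56: The driver 'fail' is not supported on darwin/arm64.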

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.48s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-488000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.91575ms)

-- stdout --
	* [NoKubernetes-488000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19356
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19356-1202/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19356-1202/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
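
Note: exit status 14 is minikube's MK_USAGE class of error; --no-kubernetes and --kubernetes-version are mutually exclusive, and the stderr above already shows the remedy (minikube config unset kubernetes-version). An illustrative Go sketch of this kind of flag guard, not minikube's actual implementation:

    // Illustrative mutual-exclusion flag guard; not minikube's actual code.
    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        if *noKubernetes && *kubernetesVersion != "" {
            // Mirrors the MK_USAGE failure captured above (exit status 14).
            fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
    }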

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-488000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-488000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.559958ms)

-- stdout --
	* The control-plane node NoKubernetes-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-488000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
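
Note: the pass condition here is simply a non-zero exit from the ssh'd "systemctl is-active --quiet service kubelet" (is-active exits 0 only when the unit is active); in this run minikube itself exits 83 because the host is stopped, which the test accepts the same way. A minimal Go sketch of that exit-code check, reusing the command from the log above:

    // Sketch: treat any non-zero exit of the ssh'd systemctl probe as "kubelet not running".
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", "NoKubernetes-488000",
            "sudo systemctl is-active --quiet service kubelet")
        if err := cmd.Run(); err != nil {
            // Covers both cases seen above: kubelet inactive, or host stopped (exit 83).
            fmt.Println("kubelet not running:", err)
            return
        }
        fmt.Println("kubelet active")
    }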

TestNoKubernetes/serial/ProfileList (31.38s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.608549416s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.77220725s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.38s)

TestNoKubernetes/serial/Stop (3.52s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-488000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-488000: (3.524031041s)
--- PASS: TestNoKubernetes/serial/Stop (3.52s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-488000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-488000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (36.617542ms)

-- stdout --
	* The control-plane node NoKubernetes-488000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-488000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-532000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.62s)

TestStartStop/group/old-k8s-version/serial/Stop (3.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-195000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-195000 --alsologtostderr -v=3: (3.664972959s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-195000 -n old-k8s-version-195000: exit status 7 (49.216583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-195000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
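
Note: the --format={{.Host}} argument used in these status checks is a Go text/template rendered against minikube's status data, which is why the raw stdout is just "Stopped"; the non-zero exit status 7 accompanies the stopped state, hence the "(may be ok)" note. A minimal sketch of the template mechanics; the one-field Status struct is a stand-in, not minikube's real status type:

    // Sketch of rendering a --format={{.Host}} style Go template.
    // Status is a stand-in struct, not minikube's real status type.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
    }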

TestStartStop/group/no-preload/serial/Stop (3.29s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-762000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-762000 --alsologtostderr -v=3: (3.285157s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-762000 -n no-preload-762000: exit status 7 (49.032333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-762000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/embed-certs/serial/Stop (3.6s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-941000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-941000 --alsologtostderr -v=3: (3.597646666s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.60s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-941000 -n embed-certs-941000: exit status 7 (56.292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-941000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-527000 --alsologtostderr -v=3
E0731 12:13:47.033798    1701 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19356-1202/.minikube/profiles/functional-080000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-527000 --alsologtostderr -v=3: (3.721872292s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-527000 -n default-k8s-diff-port-527000: exit status 7 (56.813166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-527000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-139000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.27s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-139000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-139000 --alsologtostderr -v=3: (3.265841833s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.27s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-139000 -n newest-cni-139000: exit status 7 (59.635292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-139000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.35s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-693000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-693000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-693000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/hosts:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/resolv.conf:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-693000

>>> host: crictl pods:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: crictl containers:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> k8s: describe netcat deployment:
error: context "cilium-693000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-693000" does not exist

>>> k8s: netcat logs:
error: context "cilium-693000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-693000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-693000" does not exist

>>> k8s: coredns logs:
error: context "cilium-693000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-693000" does not exist

>>> k8s: api server logs:
error: context "cilium-693000" does not exist

>>> host: /etc/cni:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: ip a s:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: ip r s:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: iptables-save:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: iptables table nat:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-693000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-693000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-693000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-693000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-693000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-693000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-693000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-693000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-693000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-693000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-693000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: kubelet daemon config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> k8s: kubelet logs:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-693000

>>> host: docker daemon status:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: docker daemon config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: docker system info:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: cri-docker daemon status:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: cri-docker daemon config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: cri-dockerd version:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: containerd daemon status:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: containerd daemon config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: containerd config dump:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: crio daemon status:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: crio daemon config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: /etc/crio:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

>>> host: crio config:
* Profile "cilium-693000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-693000"

----------------------- debugLogs end: cilium-693000 [took: 2.247072375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-693000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-693000
--- SKIP: TestNetworkPlugins/group/cilium (2.35s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-427000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-427000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
